2025-09-07T07:33:48.3802710Z Current runner version: '2.328.0'
2025-09-07T07:33:48.3807993Z Runner name: 'i-06b49f47ba3e131d7'
2025-09-07T07:33:48.3808669Z Runner group name: 'default'
2025-09-07T07:33:48.3809466Z Machine name: 'ip-10-0-56-51'
2025-09-07T07:33:48.3811673Z ##[group]GITHUB_TOKEN Permissions
2025-09-07T07:33:48.3813919Z Contents: read
2025-09-07T07:33:48.3814364Z Metadata: read
2025-09-07T07:33:48.3814771Z ##[endgroup]
2025-09-07T07:33:48.3816818Z Secret source: Actions
2025-09-07T07:33:48.3817479Z Prepare workflow directory
2025-09-07T07:33:48.4215073Z Prepare all required actions
2025-09-07T07:33:48.4247993Z Getting action download info
2025-09-07T07:33:48.6800438Z Download action repository 'pytorch/test-infra@main' (SHA:548a4bc624d43a01cdf165a63b041f0ae014ddbd)
2025-09-07T07:33:50.7888242Z Download action repository 'pytorch/pytorch@main' (SHA:93fb23d6fae7c4e82c4239a1033e522088742634)
2025-09-07T07:34:06.0837067Z Download action repository 'actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065' (SHA:a26af69be951a213d495a4c3e4e4022e16d87065)
2025-09-07T07:34:06.4445827Z Download action repository 'aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722' (SHA:ececac1a45f3b08a01d2dd070d28d111c5fe6722)
2025-09-07T07:34:06.6809536Z Download action repository 'aws-actions/amazon-ecr-login@062b18b96a7aff071d4dc91bc00c4c1a7945b076' (SHA:062b18b96a7aff071d4dc91bc00c4c1a7945b076)
2025-09-07T07:34:06.8546918Z Download action repository 'seemethere/upload-artifact-s3@baba72d0712b404f646cebe0730933554ebce96a' (SHA:baba72d0712b404f646cebe0730933554ebce96a)
2025-09-07T07:34:07.1288651Z Getting action download info
2025-09-07T07:34:07.2393703Z Download action repository 'actions/checkout@v4' (SHA:08eba0b27e820071cde6df949e0beb9ba4906955)
2025-09-07T07:34:07.4683269Z Getting action download info
2025-09-07T07:34:07.5605398Z Download action repository 'nick-fields/retry@v3.0.0' (SHA:7152eba30c6575329ac0576536151aca5a72780e)
2025-09-07T07:34:07.7609696Z Getting action download info
2025-09-07T07:34:07.8710021Z Download action repository 'nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482' (SHA:3e91a01664abd3c5cd539100d10d33b9c5b68482)
2025-09-07T07:34:08.0572350Z Getting action download info
2025-09-07T07:34:08.1795710Z Uses: pytorch/pytorch/.github/workflows/_linux-test.yml@refs/heads/main (93fb23d6fae7c4e82c4239a1033e522088742634)
2025-09-07T07:34:08.1799059Z ##[group] Inputs
2025-09-07T07:34:08.1799342Z build-environment: linux-jammy-py3.9-gcc11-build
2025-09-07T07:34:08.1800818Z test-matrix: {"include": [{"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}]}
2025-09-07T07:34:08.1802564Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77
2025-09-07T07:34:08.1803117Z sync-tag:
2025-09-07T07:34:08.1803737Z timeout-minutes: 720
2025-09-07T07:34:08.1803927Z use-gha:
2025-09-07T07:34:08.1804102Z dashboard-tag:
2025-09-07T07:34:08.1804298Z s3-bucket: gha-artifacts
2025-09-07T07:34:08.1804491Z aws-role-to-assume:
2025-09-07T07:34:08.1804869Z disable-monitor: false
2025-09-07T07:34:08.1805081Z monitor-log-interval: 5
2025-09-07T07:34:08.1805316Z monitor-data-collect-interval: 1
2025-09-07T07:34:08.1805550Z ##[endgroup]
2025-09-07T07:34:08.1806220Z Complete job name: nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx)
2025-09-07T07:34:08.2267528Z A job started hook has been configured by the self-hosted runner administrator
2025-09-07T07:34:08.2348421Z ##[group]Run '/home/ec2-user/runner-scripts/before_job.sh'
2025-09-07T07:34:08.2355490Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-09-07T07:34:08.2355942Z ##[endgroup]
2025-09-07T07:34:09.1818371Z Runner Type: linux.8xlarge.amx
2025-09-07T07:34:09.1819057Z Instance Type: m7i-flex.8xlarge
2025-09-07T07:34:09.1819320Z AMI Name: unknown
2025-09-07T07:34:09.1855477Z AMI ID: ami-05ffe3c48a9991133
2025-09-07T07:34:13.4565079Z ##[group]Run pytorch/test-infra/.github/actions/setup-ssh@main
2025-09-07T07:34:13.4565513Z with:
2025-09-07T07:34:13.4566164Z github-secret: ***
2025-09-07T07:34:13.4566864Z instructions: All testing is done inside the container, to start an interactive session run: docker exec -it $(docker container ps --format '{{.ID}}') bash
2025-09-07T07:34:13.4567459Z activate-with-label: false
2025-09-07T07:34:13.4567713Z label: with-ssh
2025-09-07T07:34:13.4567973Z remove-existing-keys: true
2025-09-07T07:34:13.4568276Z fail-silently: true
2025-09-07T07:34:13.4568529Z env:
2025-09-07T07:34:13.4568767Z GIT_DEFAULT_BRANCH: main
2025-09-07T07:34:13.4569018Z ##[endgroup]
2025-09-07T07:34:13.5785767Z Please see https://github.com/pytorch/pytorch/wiki/Debugging-using-with-ssh-for-Github-Actions for more info.
2025-09-07T07:34:13.5786723Z Not on pull request and ciflow reference could not be extracted, skipping adding ssh keys
2025-09-07T07:34:13.6035568Z ##[group]Run pytorch/pytorch/.github/actions/checkout-pytorch@main
2025-09-07T07:34:13.6035881Z with:
2025-09-07T07:34:13.6036055Z no-sudo: true
2025-09-07T07:34:13.6036244Z submodules: recursive
2025-09-07T07:34:13.6036427Z fetch-depth: 0
2025-09-07T07:34:13.6036592Z env:
2025-09-07T07:34:13.6036751Z GIT_DEFAULT_BRANCH: main
2025-09-07T07:34:13.6036936Z ##[endgroup]
2025-09-07T07:34:13.6097772Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT"
2025-09-07T07:34:13.6098376Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT"
2025-09-07T07:34:13.6105884Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-09-07T07:34:13.6106134Z env:
2025-09-07T07:34:13.6106302Z GIT_DEFAULT_BRANCH: main
2025-09-07T07:34:13.6106489Z ##[endgroup]
2025-09-07T07:34:13.6180984Z ##[group]Run # Use all available CPUs for fetching
2025-09-07T07:34:13.6181316Z # Use all available CPUs for fetching
2025-09-07T07:34:13.6181589Z cd "${GITHUB_WORKSPACE}"
2025-09-07T07:34:13.6181866Z git config --global fetch.parallel 0
2025-09-07T07:34:13.6182124Z git config --global submodule.fetchJobs 0
2025-09-07T07:34:13.6182344Z 
2025-09-07T07:34:13.6182589Z # Clean workspace.
The default checkout action should also do this, but
2025-09-07T07:34:13.6182887Z # do it here as well just in case
2025-09-07T07:34:13.6183116Z if [[ -d .git ]]; then
2025-09-07T07:34:13.6183323Z  if [ -z "${NO_SUDO}" ]; then
2025-09-07T07:34:13.6183527Z  sudo git clean -ffdx
2025-09-07T07:34:13.6183723Z  else
2025-09-07T07:34:13.6183893Z  git clean -ffdx
2025-09-07T07:34:13.6184077Z  fi
2025-09-07T07:34:13.6184226Z fi
2025-09-07T07:34:13.6189228Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2025-09-07T07:34:13.6189472Z env:
2025-09-07T07:34:13.6189710Z GIT_DEFAULT_BRANCH: main
2025-09-07T07:34:13.6189910Z NO_SUDO: true
2025-09-07T07:34:13.6190065Z ##[endgroup]
2025-09-07T07:34:13.6292812Z ##[group]Run actions/checkout@v4
2025-09-07T07:34:13.6293060Z with:
2025-09-07T07:34:13.6293272Z ref: 93fb23d6fae7c4e82c4239a1033e522088742634
2025-09-07T07:34:13.6293513Z fetch-depth: 0
2025-09-07T07:34:13.6293711Z submodules: recursive
2025-09-07T07:34:13.6293919Z show-progress: false
2025-09-07T07:34:13.6294297Z repository: pytorch/pytorch
2025-09-07T07:34:13.6294673Z token: ***
2025-09-07T07:34:13.6294835Z ssh-strict: true
2025-09-07T07:34:13.6294998Z ssh-user: git
2025-09-07T07:34:13.6295169Z persist-credentials: true
2025-09-07T07:34:13.6295357Z clean: true
2025-09-07T07:34:13.6295553Z sparse-checkout-cone-mode: true
2025-09-07T07:34:13.6295764Z fetch-tags: false
2025-09-07T07:34:13.6295931Z lfs: false
2025-09-07T07:34:13.6296091Z set-safe-directory: true
2025-09-07T07:34:13.6296287Z env:
2025-09-07T07:34:13.6296452Z GIT_DEFAULT_BRANCH: main
2025-09-07T07:34:13.6296644Z ##[endgroup]
2025-09-07T07:34:13.7206789Z Syncing repository: pytorch/pytorch
2025-09-07T07:34:13.7207987Z ##[group]Getting Git version info
2025-09-07T07:34:13.7208323Z Working directory is '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2025-09-07T07:34:13.7208811Z [command]/usr/bin/git version
2025-09-07T07:34:13.7463015Z git version 2.47.1
2025-09-07T07:34:13.7480291Z ##[endgroup]
2025-09-07T07:34:13.7495933Z Copying '/home/ec2-user/.gitconfig' to '/home/ec2-user/actions-runner/_work/_temp/8c3f63c3-9cdb-447f-a8ac-f6214bd84155/.gitconfig'
2025-09-07T07:34:13.7530545Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/8c3f63c3-9cdb-447f-a8ac-f6214bd84155' before making global git config changes
2025-09-07T07:34:13.7531335Z Adding repository directory to the temporary git global config as a safe directory
2025-09-07T07:34:13.7542943Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch
2025-09-07T07:34:13.7600450Z Deleting the contents of '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2025-09-07T07:34:13.7602596Z ##[group]Initializing the repository
2025-09-07T07:34:13.7611618Z [command]/usr/bin/git init /home/ec2-user/actions-runner/_work/pytorch/pytorch
2025-09-07T07:34:13.7672885Z hint: Using 'master' as the name for the initial branch. This default branch name
2025-09-07T07:34:13.7673536Z hint: is subject to change. To configure the initial branch name to use in all
2025-09-07T07:34:13.7674091Z hint: of your new repositories, which will suppress this warning, call:
2025-09-07T07:34:13.7674368Z hint:
2025-09-07T07:34:13.7675167Z hint: git config --global init.defaultBranch <name>
2025-09-07T07:34:13.7675492Z hint:
2025-09-07T07:34:13.7675764Z hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
2025-09-07T07:34:13.7676205Z hint: 'development'.
The just-created branch can be renamed via this command:
2025-09-07T07:34:13.7676481Z hint:
2025-09-07T07:34:13.7676652Z hint: git branch -m <name>
2025-09-07T07:34:13.7697738Z Initialized empty Git repository in /home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/
2025-09-07T07:34:13.7709705Z [command]/usr/bin/git remote add origin https://github.com/pytorch/pytorch
2025-09-07T07:34:13.7755118Z ##[endgroup]
2025-09-07T07:34:13.7755560Z ##[group]Disabling automatic garbage collection
2025-09-07T07:34:13.7764026Z [command]/usr/bin/git config --local gc.auto 0
2025-09-07T07:34:13.7789848Z ##[endgroup]
2025-09-07T07:34:13.7790220Z ##[group]Setting up auth
2025-09-07T07:34:13.7797379Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2025-09-07T07:34:13.7823767Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
2025-09-07T07:34:13.8201048Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2025-09-07T07:34:13.8232993Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
2025-09-07T07:34:13.8557418Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic ***
2025-09-07T07:34:13.8631102Z ##[endgroup]
2025-09-07T07:34:13.8631469Z ##[group]Fetching the repository
2025-09-07T07:34:13.8635708Z [command]/usr/bin/git -c protocol.version=2 fetch --prune --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
2025-09-07T07:34:58.1026066Z From https://github.com/pytorch/pytorch
2025-09-07T07:34:58.1026835Z * [new branch] 160583 -> origin/160583
2025-09-07T07:34:58.1032680Z * [new branch] 2.6.0.dev20241004+ -> origin/2.6.0.dev20241004+
2025-09-07T07:34:58.1038646Z * [new branch] 5addvllmbuild -> origin/5addvllmbuild
2025-09-07T07:34:58.1039248Z * [new branch] AaronWang04_addmmfusion_perftest -> origin/AaronWang04_addmmfusion_perftest
2025-09-07T07:34:58.1039717Z * [new branch] HDCharles-2.6.0-release-notes -> origin/HDCharles-2.6.0-release-notes
2025-09-07T07:34:58.1040112Z * [new branch] ISSUE-154849 -> origin/ISSUE-154849
2025-09-07T07:34:58.1040558Z * [new branch] JackCaoG/dynamo_make_fx_non_core_aten_ops -> origin/JackCaoG/dynamo_make_fx_non_core_aten_ops
2025-09-07T07:34:58.1041004Z * [new branch] NicoshevSVE128 -> origin/NicoshevSVE128
2025-09-07T07:34:58.1041395Z * [new branch] PR-AOTInductorNoneBug -> origin/PR-AOTInductorNoneBug
2025-09-07T07:34:58.1041825Z * [new branch] PR-AOTInductorNoneBugFix -> origin/PR-AOTInductorNoneBugFix
2025-09-07T07:34:58.1042563Z * [new branch] PR-FixConfigsIssue -> origin/PR-FixConfigsIssue
2025-09-07T07:34:58.1042963Z * [new branch] PR-NoneBugFix-viable -> origin/PR-NoneBugFix-viable
2025-09-07T07:34:58.1043335Z * [new branch] PR-ResetToZero -> origin/PR-ResetToZero
2025-09-07T07:34:58.1043703Z * [new branch] Update-Flash-Packaging -> origin/Update-Flash-Packaging
2025-09-07T07:34:58.1044059Z * [new branch] VLA_exp -> origin/VLA_exp
2025-09-07T07:34:58.1044447Z * [new branch] actually-run-mps-aot-inductor -> origin/actually-run-mps-aot-inductor
2025-09-07T07:34:58.1044932Z * [new branch] add-missing-args-normalization -> origin/add-missing-args-normalization
2025-09-07T07:34:58.1045581Z * [new branch]
add-user-guide-structure -> origin/add-user-guide-structure 2025-09-07T07:34:58.1046004Z * [new branch] add-vllm-nightly-build -> origin/add-vllm-nightly-build 2025-09-07T07:34:58.1046402Z * [new branch] add_compile_benchmarking -> origin/add_compile_benchmarking 2025-09-07T07:34:58.1047011Z * [new branch] addmm-heuristic -> origin/addmm-heuristic 2025-09-07T07:34:58.1047368Z * [new branch] addsimde -> origin/addsimde 2025-09-07T07:34:58.1047706Z * [new branch] addvllmtest -> origin/addvllmtest 2025-09-07T07:34:58.1048085Z * [new branch] adi/acl_upgrade -> origin/adi/acl_upgrade 2025-09-07T07:34:58.1048421Z * [new branch] adi/test -> origin/adi/test 2025-09-07T07:34:58.1048711Z * [new branch] adi/test_bgemm -> origin/adi/test_bgemm 2025-09-07T07:34:58.1049038Z * [new branch] adi/test_fusions -> origin/adi/test_fusions 2025-09-07T07:34:58.1049420Z * [new branch] adi/test_onednn_v3.9 -> origin/adi/test_onednn_v3.9 2025-09-07T07:34:58.1049869Z * [new branch] adi/test_presve_change -> origin/adi/test_presve_change 2025-09-07T07:34:58.1050211Z * [new branch] adi/test_timm -> origin/adi/test_timm 2025-09-07T07:34:58.1050564Z * [new branch] adi/testpresve_change -> origin/adi/testpresve_change 2025-09-07T07:34:58.1050943Z * [new branch] aditew01/test/vec_bf16 -> origin/aditew01/test/vec_bf16 2025-09-07T07:34:58.1051348Z * [new branch] ah-globalfeedback-hook -> origin/ah-globalfeedback-hook 2025-09-07T07:34:58.1051845Z * [new branch] alt-disable -> origin/alt-disable 2025-09-07T07:34:58.1052235Z * [new branch] angelayi/aoti_additional_files -> origin/angelayi/aoti_additional_files 2025-09-07T07:34:58.1052668Z * [new branch] angelayi/aoti_inductor_fx -> origin/angelayi/aoti_inductor_fx 2025-09-07T07:34:58.1053073Z * [new branch] angelayi/benchmark -> origin/angelayi/benchmark 2025-09-07T07:34:58.1053456Z * [new branch] angelayi/benchmark2 -> origin/angelayi/benchmark2 2025-09-07T07:34:58.1053894Z * [new branch] angelayi/change_pytree_serialization -> origin/angelayi/change_pytree_serialization 2025-09-07T07:34:58.1054323Z * [new branch] angelayi/cpp_loader -> origin/angelayi/cpp_loader 2025-09-07T07:34:58.1054719Z * [new branch] angelayi/custom_op_subgraph -> origin/angelayi/custom_op_subgraph 2025-09-07T07:34:58.1055110Z * [new branch] angelayi/customop -> origin/angelayi/customop 2025-09-07T07:34:58.1055493Z * [new branch] angelayi/fake_cache_empty -> origin/angelayi/fake_cache_empty 2025-09-07T07:34:58.1056104Z * [new branch] angelayi/is_symbolic_tracing -> origin/angelayi/is_symbolic_tracing 2025-09-07T07:34:58.1061610Z * [new branch] angelayi/item -> origin/angelayi/item 2025-09-07T07:34:58.1062454Z * [new branch] angelayi/no_so_weight -> origin/angelayi/no_so_weight 2025-09-07T07:34:58.1063555Z * [new branch] angelayi/opoverload -> origin/angelayi/opoverload 2025-09-07T07:34:58.1063995Z * [new branch] angelayi/pattern -> origin/angelayi/pattern 2025-09-07T07:34:58.1064489Z * [new branch] angelayi/pytree -> origin/angelayi/pytree 2025-09-07T07:34:58.1064881Z * [new branch] angelayi/scan_layers -> origin/angelayi/scan_layers 2025-09-07T07:34:58.1065367Z * [new branch] angelayi/symint_input -> origin/angelayi/symint_input 2025-09-07T07:34:58.1065747Z * [new branch] angelayi/test_cpp -> origin/angelayi/test_cpp 2025-09-07T07:34:58.1066209Z * [new branch] angelayi/torch_size -> origin/angelayi/torch_size 2025-09-07T07:34:58.1067074Z * [new branch] aoti-cuda-alloc -> origin/aoti-cuda-alloc 2025-09-07T07:34:58.1067509Z * [new branch] aoti_target_windows -> origin/aoti_target_windows 
2025-09-07T07:34:58.1067863Z * [new branch] aoti_weight_sharing -> origin/aoti_weight_sharing 2025-09-07T07:34:58.1068265Z * [new branch] atalman-inductor-perf-cu124 -> origin/atalman-inductor-perf-cu124 2025-09-07T07:34:58.1068745Z * [new branch] atalman-inductor-perf-cu124.1 -> origin/atalman-inductor-perf-cu124.1 2025-09-07T07:34:58.1069147Z * [new branch] atalman-patch-1 -> origin/atalman-patch-1 2025-09-07T07:34:58.1069477Z * [new branch] atalman-patch-3 -> origin/atalman-patch-3 2025-09-07T07:34:58.1069806Z * [new branch] atalman-patch-4 -> origin/atalman-patch-4 2025-09-07T07:34:58.1070123Z * [new branch] atalman-patch-5 -> origin/atalman-patch-5 2025-09-07T07:34:58.1070464Z * [new branch] atalman-patch-6 -> origin/atalman-patch-6 2025-09-07T07:34:58.1070814Z * [new branch] atalman_inductor_2.3.0 -> origin/atalman_inductor_2.3.0 2025-09-07T07:34:58.1071225Z * [new branch] atalman_inductor_2.3.1 -> origin/atalman_inductor_2.3.1 2025-09-07T07:34:58.1071573Z * [new branch] atalman_inductor_2.4.0 -> origin/atalman_inductor_2.4.0 2025-09-07T07:34:58.1071919Z * [new branch] atalman_inductor_2.4.x -> origin/atalman_inductor_2.4.x 2025-09-07T07:34:58.1072342Z * [new branch] autoupdate-transformers-pin-via-pr -> origin/autoupdate-transformers-pin-via-pr 2025-09-07T07:34:58.1072952Z * [new branch] bahuang/dtensor_demo -> origin/bahuang/dtensor_demo 2025-09-07T07:34:58.1073294Z * [new branch] bahuang/test -> origin/bahuang/test 2025-09-07T07:34:58.1073837Z * [new branch] base/1.5 -> origin/base/1.5 2025-09-07T07:34:58.1074369Z * [new branch] batching_sdpa_efficient_attention -> origin/batching_sdpa_efficient_attention 2025-09-07T07:34:58.1075262Z * [new branch] bc-lint-config -> origin/bc-lint-config 2025-09-07T07:34:58.1075652Z * [new branch] bc-lint-test-new-config -> origin/bc-lint-test-new-config 2025-09-07T07:34:58.1076008Z * [new branch] benchmark-updates -> origin/benchmark-updates 2025-09-07T07:34:58.1076391Z * [new branch] benchmarker_compat_with_do_bench -> origin/benchmarker_compat_with_do_bench 2025-09-07T07:34:58.1076776Z * [new branch] benchmarking-script -> origin/benchmarking-script 2025-09-07T07:34:58.1082353Z * [new branch] bertmaher/pinbump26 -> origin/bertmaher/pinbump26 2025-09-07T07:34:58.1082781Z * [new branch] bertrand/cutlass -> origin/bertrand/cutlass 2025-09-07T07:34:58.1083138Z * [new branch] bf/cg-custom-wrapper -> origin/bf/cg-custom-wrapper 2025-09-07T07:34:58.1083651Z * [new branch] bf/cg-or-error -> origin/bf/cg-or-error 2025-09-07T07:34:58.1083987Z * [new branch] bf/cg-remove-check -> origin/bf/cg-remove-check 2025-09-07T07:34:58.1084321Z * [new branch] bf/cg-skip-1-kernel -> origin/bf/cg-skip-1-kernel 2025-09-07T07:34:58.1084647Z * [new branch] bf/cudagraph -> origin/bf/cudagraph 2025-09-07T07:34:58.1085053Z * [new branch] bf/cudagraph-disable-input-mutation -> origin/bf/cudagraph-disable-input-mutation 2025-09-07T07:34:58.1085689Z * [new branch] bf/cudagraph-enable-input-mutation-support-benchmark -> origin/bf/cudagraph-enable-input-mutation-support-benchmark 2025-09-07T07:34:58.1086264Z * [new branch] bf/cudagraph-partition -> origin/bf/cudagraph-partition 2025-09-07T07:34:58.1086874Z * [new branch] bf/default-recompile-reason -> origin/bf/default-recompile-reason 2025-09-07T07:34:58.1087371Z * [new branch] bf/donated-buffer-bench -> origin/bf/donated-buffer-bench 2025-09-07T07:34:58.1087762Z * [new branch] bf/exp -> origin/bf/exp 2025-09-07T07:34:58.1088109Z * [new branch] bf/pa-non-divisible -> origin/bf/pa-non-divisible 2025-09-07T07:34:58.1088534Z * 
[new branch] bf/partition-move-cpu -> origin/bf/partition-move-cpu 2025-09-07T07:34:58.1088897Z * [new branch] bf/partition-turn-on -> origin/bf/partition-turn-on 2025-09-07T07:34:58.1089270Z * [new branch] bf/remove-check-55b0c39d -> origin/bf/remove-check-55b0c39d 2025-09-07T07:34:58.1089609Z * [new branch] bf/rope -> origin/bf/rope 2025-09-07T07:34:58.1090185Z * [new branch] bisect_perf_hf_T5_3acc6eac492 -> origin/bisect_perf_hf_T5_3acc6eac492 2025-09-07T07:34:58.1090700Z * [new branch] bisect_perf_hf_T5_3fcf66f61fb -> origin/bisect_perf_hf_T5_3fcf66f61fb 2025-09-07T07:34:58.1091114Z * [new branch] bisect_perf_hf_T5_4009d154129 -> origin/bisect_perf_hf_T5_4009d154129 2025-09-07T07:34:58.1091475Z * [new branch] bisect_perf_hf_T5_40d0740e73d -> origin/bisect_perf_hf_T5_40d0740e73d 2025-09-07T07:34:58.1091846Z * [new branch] bisect_perf_hf_T5_5268754e -> origin/bisect_perf_hf_T5_5268754e 2025-09-07T07:34:58.1092208Z * [new branch] bisect_perf_hf_T5_7d89a8d385c -> origin/bisect_perf_hf_T5_7d89a8d385c 2025-09-07T07:34:58.1092620Z * [new branch] bisect_perf_hf_T5_b7a25c1ee7c -> origin/bisect_perf_hf_T5_b7a25c1ee7c 2025-09-07T07:34:58.1093063Z * [new branch] bisect_perf_hf_T5_c25b201583f -> origin/bisect_perf_hf_T5_c25b201583f 2025-09-07T07:34:58.1093443Z * [new branch] bisect_perf_hf_T5_c93e57efac0 -> origin/bisect_perf_hf_T5_c93e57efac0 2025-09-07T07:34:58.1093883Z * [new branch] bisect_perf_hf_T5_ca9813ea149 -> origin/bisect_perf_hf_T5_ca9813ea149 2025-09-07T07:34:58.1094396Z * [new branch] bisect_perf_hf_T5_d65f194a -> origin/bisect_perf_hf_T5_d65f194a 2025-09-07T07:34:58.1095127Z * [new branch] bisect_perf_hf_T5_da94ab0b -> origin/bisect_perf_hf_T5_da94ab0b 2025-09-07T07:34:58.1095922Z * [new branch] bisect_perf_hf_T5_da94ab0b_new -> origin/bisect_perf_hf_T5_da94ab0b_new 2025-09-07T07:34:58.1096887Z * [new branch] bisect_perf_hf_T5_db4e8a1d8a8 -> origin/bisect_perf_hf_T5_db4e8a1d8a8 2025-09-07T07:34:58.1097805Z * [new branch] bisect_perf_hf_T5_e0d97e936a2 -> origin/bisect_perf_hf_T5_e0d97e936a2 2025-09-07T07:34:58.1098252Z * [new branch] bisect_perf_hf_T5_f23621ec563 -> origin/bisect_perf_hf_T5_f23621ec563 2025-09-07T07:34:58.1098688Z * [new branch] bowbao/bench_updates_stage -> origin/bowbao/bench_updates_stage 2025-09-07T07:34:58.1099241Z * [new branch] bowbao/dort_rewriter -> origin/bowbao/dort_rewriter 2025-09-07T07:34:58.1100031Z * [new branch] bowbao/wip_prs -> origin/bowbao/wip_prs 2025-09-07T07:34:58.1101013Z * [new branch] brister/break_tensorbox -> origin/brister/break_tensorbox 2025-09-07T07:34:58.1101618Z * [new branch] brister/custom_fx_backend -> origin/brister/custom_fx_backend 2025-09-07T07:34:58.1102233Z * [new branch] brister/fx_custom_triton -> origin/brister/fx_custom_triton 2025-09-07T07:34:58.1102860Z * [new branch] brister/tensor_box_output -> origin/brister/tensor_box_output 2025-09-07T07:34:58.1103570Z * [new branch] brister/tiled_reduction_no_numel_check -> origin/brister/tiled_reduction_no_numel_check 2025-09-07T07:34:58.1104657Z * [new branch] c57382a49 -> origin/c57382a49 2025-09-07T07:34:58.1105515Z * [new branch] ca_0431d47eaa -> origin/ca_0431d47eaa 2025-09-07T07:34:58.1106009Z * [new branch] ca_fix_0431d47eaa -> origin/ca_fix_0431d47eaa 2025-09-07T07:34:58.1108146Z * [new branch] camyll/revert-94bc900da97ad7f3c35b3b819bb53b23c74b581a-for-release-2.8 -> origin/camyll/revert-94bc900da97ad7f3c35b3b819bb53b23c74b581a-for-release-2.8 2025-09-07T07:34:58.1109018Z * [new branch] camyllh/test_setup_hooks_push -> origin/camyllh/test_setup_hooks_push 
2025-09-07T07:34:58.1109717Z * [new branch] cherry-pick-149654-by-pytorch_bot_bot_ -> origin/cherry-pick-149654-by-pytorch_bot_bot_ 2025-09-07T07:34:58.1110908Z * [new branch] cherry-pick-151939-by-pytorch_bot_bot_ -> origin/cherry-pick-151939-by-pytorch_bot_bot_ 2025-09-07T07:34:58.1111505Z * [new branch] cherry-pick-154174-by-pytorch_bot_bot_ -> origin/cherry-pick-154174-by-pytorch_bot_bot_ 2025-09-07T07:34:58.1111986Z * [new branch] cherry-pick-156260-by-pytorch_bot_bot_ -> origin/cherry-pick-156260-by-pytorch_bot_bot_ 2025-09-07T07:34:58.1112650Z * [new branch] cherry-pick-157453-by-pytorch_bot_bot_ -> origin/cherry-pick-157453-by-pytorch_bot_bot_ 2025-09-07T07:34:58.1113251Z * [new branch] cherry-pick-157513-by-pytorch_bot_bot_ -> origin/cherry-pick-157513-by-pytorch_bot_bot_ 2025-09-07T07:34:58.1113834Z * [new branch] cherry-pick-157695-by-pytorch_bot_bot_ -> origin/cherry-pick-157695-by-pytorch_bot_bot_ 2025-09-07T07:34:58.1114672Z * [new branch] cherry-pick-157732-by-pytorch_bot_bot_ -> origin/cherry-pick-157732-by-pytorch_bot_bot_ 2025-09-07T07:34:58.1115139Z * [new branch] cherry-pick-158537-by-pytorch_bot_bot_ -> origin/cherry-pick-158537-by-pytorch_bot_bot_ 2025-09-07T07:34:58.1115790Z * [new branch] cherry-pick-159969-by-pytorch_bot_bot_ -> origin/cherry-pick-159969-by-pytorch_bot_bot_ 2025-09-07T07:34:58.1116390Z * [new branch] cherry-pick-160586-by-pytorch_bot_bot_ -> origin/cherry-pick-160586-by-pytorch_bot_bot_ 2025-09-07T07:34:58.1117081Z * [new branch] chilli/flex_vllm -> origin/chilli/flex_vllm 2025-09-07T07:34:58.1117853Z * [new branch] cleanup-inductor-benchmark-images -> origin/cleanup-inductor-benchmark-images 2025-09-07T07:34:58.1118567Z * [new branch] codex-testing -> origin/codex-testing 2025-09-07T07:34:58.1123985Z * [new branch] codex/add-helper-function-to-sizevars.py -> origin/codex/add-helper-function-to-sizevars.py 2025-09-07T07:34:58.1124635Z * [new branch] codex/add-helper-function-to-sizevars.py_2025-09-05 -> origin/codex/add-helper-function-to-sizevars.py_2025-09-05 2025-09-07T07:34:58.1125243Z * [new branch] codex/add-metadata-field-for-file-path -> origin/codex/add-metadata-field-for-file-path 2025-09-07T07:34:58.1125884Z * [new branch] codex/add-test-for-inductor-local-cache-behavior -> origin/codex/add-test-for-inductor-local-cache-behavior 2025-09-07T07:34:58.1126829Z * [new branch] codex/create-test-for-tensor-memory-leak-in-cudagraph -> origin/codex/create-test-for-tensor-memory-leak-in-cudagraph 2025-09-07T07:34:58.1127630Z * [new branch] codex/fix-issue-121219-in-pytorch -> origin/codex/fix-issue-121219-in-pytorch 2025-09-07T07:34:58.1128117Z * [new branch] codex/fix-issue-160415-in-pytorch -> origin/codex/fix-issue-160415-in-pytorch 2025-09-07T07:34:58.1128663Z * [new branch] codex/fix-noqengine-quantized-engine-support -> origin/codex/fix-noqengine-quantized-engine-support 2025-09-07T07:34:58.1129237Z * [new branch] codex/fix-pin_memory-error-handling -> origin/codex/fix-pin_memory-error-handling 2025-09-07T07:34:58.1129745Z * [new branch] codex/propose-fix-for-issue-160332 -> origin/codex/propose-fix-for-issue-160332 2025-09-07T07:34:58.1130325Z * [new branch] codex/refactor-lintrunner-config-to-use-uv-run -> origin/codex/refactor-lintrunner-config-to-use-uv-run 2025-09-07T07:34:58.1131011Z * [new branch] codex/remove-allow-untyped-defs-and-fix-type-errors -> origin/codex/remove-allow-untyped-defs-and-fix-type-errors 2025-09-07T07:34:58.1131797Z * [new branch] compile_fsdp2_disable_stream_and_event -> 
origin/compile_fsdp2_disable_stream_and_event 2025-09-07T07:34:58.1132889Z * [new branch] context_test -> origin/context_test 2025-09-07T07:34:58.1133379Z * [new branch] copilot/fix-157446 -> origin/copilot/fix-157446 2025-09-07T07:34:58.1133866Z * [new branch] copy_graph -> origin/copy_graph 2025-09-07T07:34:58.1134365Z * [new branch] cpio/fix_new_ami_tests -> origin/cpio/fix_new_ami_tests 2025-09-07T07:34:58.1137327Z * [new branch] csl/always_produce_xml -> origin/csl/always_produce_xml 2025-09-07T07:34:58.1137763Z * [new branch] csl/build_test_more_procs -> origin/csl/build_test_more_procs 2025-09-07T07:34:58.1138138Z * [new branch] csl/build_test_more_procs2 -> origin/csl/build_test_more_procs2 2025-09-07T07:34:58.1138540Z * [new branch] csl/disable_flaky_cpp_test -> origin/csl/disable_flaky_cpp_test 2025-09-07T07:34:58.1138903Z * [new branch] csl/disable_periodic_test -> origin/csl/disable_periodic_test 2025-09-07T07:34:58.1139309Z * [new branch] csl/exclude_rocm_viable_strict -> origin/csl/exclude_rocm_viable_strict 2025-09-07T07:34:58.1139692Z * [new branch] csl/katex -> origin/csl/katex 2025-09-07T07:34:58.1140017Z * [new branch] csl/larger_runner -> origin/csl/larger_runner 2025-09-07T07:34:58.1140658Z * [new branch] csl/lintrunner_stuff -> origin/csl/lintrunner_stuff 2025-09-07T07:34:58.1140996Z * [new branch] csl/mps_sharding -> origin/csl/mps_sharding 2025-09-07T07:34:58.1141329Z * [new branch] csl/multistage_docker -> origin/csl/multistage_docker 2025-09-07T07:34:58.1141686Z * [new branch] csl/name_link_check_job -> origin/csl/name_link_check_job 2025-09-07T07:34:58.1142034Z * [new branch] csl/no_keep_goin_rocm -> origin/csl/no_keep_goin_rocm 2025-09-07T07:34:58.1142388Z * [new branch] csl/not_600_timeout -> origin/csl/not_600_timeout 2025-09-07T07:34:58.1142737Z * [new branch] csl/revert_open -> origin/csl/revert_open 2025-09-07T07:34:58.1143055Z * [new branch] csl/skip_build -> origin/csl/skip_build 2025-09-07T07:34:58.1148481Z * [new branch] csl/test_cuda_build_large_runner -> origin/csl/test_cuda_build_large_runner 2025-09-07T07:34:58.1149093Z * [new branch] csl/win_sccache -> origin/csl/win_sccache 2025-09-07T07:34:58.1149616Z * [new branch] cublasltrelax2 -> origin/cublasltrelax2 2025-09-07T07:34:58.1150385Z * [new branch] cublasrelax2 -> origin/cublasrelax2 2025-09-07T07:34:58.1151034Z * [new branch] cudnnsdparefactor -> origin/cudnnsdparefactor 2025-09-07T07:34:58.1151406Z * [new branch] custom_lowering_dict -> origin/custom_lowering_dict 2025-09-07T07:34:58.1151741Z * [new branch] czhuge_muon_dev -> origin/czhuge_muon_dev 2025-09-07T07:34:58.1152070Z * [new branch] d4l3k/delete_hook -> origin/d4l3k/delete_hook 2025-09-07T07:34:58.1152383Z * [new branch] dcp_zoc -> origin/dcp_zoc 2025-09-07T07:34:58.1152683Z * [new branch] debug-guard -> origin/debug-guard 2025-09-07T07:34:58.1153013Z * [new branch] delete-quant-docs -> origin/delete-quant-docs 2025-09-07T07:34:58.1153826Z * [new branch] dependabot/pip/dot-ci/docker/ci_commit_pins/main/transformers-4.55.2 -> origin/dependabot/pip/dot-ci/docker/ci_commit_pins/main/transformers-4.55.2 2025-09-07T07:34:58.1154704Z * [new branch] dependabot/pip/dot-ci/docker/ci_commit_pins/main/transformers-4.55.3 -> origin/dependabot/pip/dot-ci/docker/ci_commit_pins/main/transformers-4.55.3 2025-09-07T07:34:58.1155607Z * [new branch] dependabot/pip/dot-ci/docker/ci_commit_pins/main/transformers-4.55.4 -> origin/dependabot/pip/dot-ci/docker/ci_commit_pins/main/transformers-4.55.4 2025-09-07T07:34:58.1156533Z * [new branch] 
dependabot/pip/dot-ci/docker/ci_commit_pins/main/transformers-4.56.0 -> origin/dependabot/pip/dot-ci/docker/ci_commit_pins/main/transformers-4.56.0 2025-09-07T07:34:58.1157223Z * [new branch] dependabot/pip/dot-ci/docker/protobuf-5.29.5 -> origin/dependabot/pip/dot-ci/docker/protobuf-5.29.5 2025-09-07T07:34:58.1157825Z * [new branch] dependabot/pip/dot-github/requirements/protobuf-5.29.5 -> origin/dependabot/pip/dot-github/requirements/protobuf-5.29.5 2025-09-07T07:34:58.1158341Z * [new branch] desertfire/test_cpp_wrapper -> origin/desertfire/test_cpp_wrapper 2025-09-07T07:34:58.1158793Z * [new branch] desertfire/triton-cpu-for-aarch64 -> origin/desertfire/triton-cpu-for-aarch64 2025-09-07T07:34:58.1161724Z * [new branch] dev/joona/MPSNDArrayAdd -> origin/dev/joona/MPSNDArrayAdd 2025-09-07T07:34:58.1162164Z * [new branch] dev/joona/Unranked -> origin/dev/joona/Unranked 2025-09-07T07:34:58.1162588Z * [new branch] dev/joona/cat -> origin/dev/joona/cat 2025-09-07T07:34:58.1162986Z * [new branch] dev/joona/cat_remove_graph -> origin/dev/joona/cat_remove_graph 2025-09-07T07:34:58.1163686Z * [new branch] dev/joona/embeddingbag -> origin/dev/joona/embeddingbag 2025-09-07T07:34:58.1166186Z * [new branch] dev/joona/getTensorsString -> origin/dev/joona/getTensorsString 2025-09-07T07:34:58.1167497Z * [new branch] dev/joona/maxpool2dwithindices_errmsg -> origin/dev/joona/maxpool2dwithindices_errmsg 2025-09-07T07:34:58.1168058Z * [new branch] dev/joona/mps_linear_macos14 -> origin/dev/joona/mps_linear_macos14 2025-09-07T07:34:58.1168468Z * [new branch] dev/joona/sdpa -> origin/dev/joona/sdpa 2025-09-07T07:34:58.1168864Z * [new branch] dev/joona/topk_newapi -> origin/dev/joona/topk_newapi 2025-09-07T07:34:58.1172492Z * [new branch] dev/joona/type_inf -> origin/dev/joona/type_inf 2025-09-07T07:34:58.1173045Z * [new branch] dev/joona/upsize3d -> origin/dev/joona/upsize3d 2025-09-07T07:34:58.1173537Z * [new branch] disable -> origin/disable 2025-09-07T07:34:58.1174029Z * [new branch] e2e-baseline -> origin/e2e-baseline 2025-09-07T07:34:58.1174502Z * [new branch] eigen_for_sparse_addmm_v2 -> origin/eigen_for_sparse_addmm_v2 2025-09-07T07:34:58.1175084Z * [new branch] embg/test_inductor_ci_128B -> origin/embg/test_inductor_ci_128B 2025-09-07T07:34:58.1175941Z * [new branch] embg/test_inductor_ci_base -> origin/embg/test_inductor_ci_base 2025-09-07T07:34:58.1176343Z * [new branch] embg/test_inductor_ci_control -> origin/embg/test_inductor_ci_control 2025-09-07T07:34:58.1176735Z * [new branch] embg/triton_l2_prefetch_128B -> origin/embg/triton_l2_prefetch_128B 2025-09-07T07:34:58.1177230Z * [new branch] embg/triton_l2_prefetch_256B -> origin/embg/triton_l2_prefetch_256B 2025-09-07T07:34:58.1177595Z * [new branch] eqy-patch-1 -> origin/eqy-patch-1 2025-09-07T07:34:58.1177898Z * [new branch] eqy-patch-2 -> origin/eqy-patch-2 2025-09-07T07:34:58.1178194Z * [new branch] eqy-patch-3 -> origin/eqy-patch-3 2025-09-07T07:34:58.1178488Z * [new branch] eqy-patch-4 -> origin/eqy-patch-4 2025-09-07T07:34:58.1178862Z * [new branch] example-convert-torch.nn -> origin/example-convert-torch.nn 2025-09-07T07:34:58.1179350Z * [new branch] exclamaforte/add-contiguous-threshold -> origin/exclamaforte/add-contiguous-threshold 2025-09-07T07:34:58.1182466Z * [new branch] exclamaforte/amd-ma -> origin/exclamaforte/amd-ma 2025-09-07T07:34:58.1182925Z * [new branch] exclamaforte/bump-transformer-version -> origin/exclamaforte/bump-transformer-version 2025-09-07T07:34:58.1183423Z * [new branch] exclamaforte/clear-feedback-savers -> 
origin/exclamaforte/clear-feedback-savers 2025-09-07T07:34:58.1183912Z * [new branch] exclamaforte/combo-kernels-perf-run -> origin/exclamaforte/combo-kernels-perf-run 2025-09-07T07:34:58.1184389Z * [new branch] exclamaforte/do_bench_refactor -> origin/exclamaforte/do_bench_refactor 2025-09-07T07:34:58.1184815Z * [new branch] exclamaforte/enable-mem-dep-fusion -> origin/exclamaforte/enable-mem-dep-fusion 2025-09-07T07:34:58.1185322Z * [new branch] exclamaforte/fix-exhaustive-autotuning -> origin/exclamaforte/fix-exhaustive-autotuning 2025-09-07T07:34:58.1185883Z * [new branch] exclamaforte/fix-exhuastive-autotuning-reland -> origin/exclamaforte/fix-exhuastive-autotuning-reland 2025-09-07T07:34:58.1187414Z * [new branch] exclamaforte/fix-trace-parsing-fx-svg -> origin/exclamaforte/fix-trace-parsing-fx-svg 2025-09-07T07:34:58.1187950Z * [new branch] exclamaforte/force-pointwise-cat-perf-run -> origin/exclamaforte/force-pointwise-cat-perf-run 2025-09-07T07:34:58.1188409Z * [new branch] exclamaforte/fusion-data -> origin/exclamaforte/fusion-data 2025-09-07T07:34:58.1188866Z * [new branch] exclamaforte/gemm-benchmark-run -> origin/exclamaforte/gemm-benchmark-run 2025-09-07T07:34:58.1189300Z * [new branch] exclamaforte/gemm-export-model -> origin/exclamaforte/gemm-export-model 2025-09-07T07:34:58.1189714Z * [new branch] exclamaforte/gemm-model -> origin/exclamaforte/gemm-model 2025-09-07T07:34:58.1190194Z * [new branch] exclamaforte/gemm-model-all-data-collection -> origin/exclamaforte/gemm-model-all-data-collection 2025-09-07T07:34:58.1190683Z * [new branch] exclamaforte/gemm-to-amd -> origin/exclamaforte/gemm-to-amd 2025-09-07T07:34:58.1191519Z * [new branch] exclamaforte/just-gemm-model -> origin/exclamaforte/just-gemm-model 2025-09-07T07:34:58.1191985Z * [new branch] exclamaforte/just-gemm-model-no-refactor -> origin/exclamaforte/just-gemm-model-no-refactor 2025-09-07T07:34:58.1192448Z * [new branch] exclamaforte/max-autotune-ieee -> origin/exclamaforte/max-autotune-ieee 2025-09-07T07:34:58.1192853Z * [new branch] exclamaforte/memory-counter -> origin/exclamaforte/memory-counter 2025-09-07T07:34:58.1193260Z * [new branch] exclamaforte/profile-diff-algo -> origin/exclamaforte/profile-diff-algo 2025-09-07T07:34:58.1193813Z * [new branch] exclamaforte/profiler-combo -> origin/exclamaforte/profiler-combo 2025-09-07T07:34:58.1194345Z * [new branch] exclamaforte/test_cpp_wrapper_mode -> origin/exclamaforte/test_cpp_wrapper_mode 2025-09-07T07:34:58.1194820Z * [new branch] exclamaforte/update-autotune-configs -> origin/exclamaforte/update-autotune-configs 2025-09-07T07:34:58.1197464Z * [new branch] exclamaforte/update-autotune-configs-2 -> origin/exclamaforte/update-autotune-configs-2 2025-09-07T07:34:58.1197964Z * [new branch] exclamforte/gemm-model-final -> origin/exclamforte/gemm-model-final 2025-09-07T07:34:58.1198330Z * [new branch] exec -> origin/exec 2025-09-07T07:34:58.1198678Z * [new branch] executorch-module-shim -> origin/executorch-module-shim 2025-09-07T07:34:58.1199057Z * [new branch] experimental-mosaic -> origin/experimental-mosaic 2025-09-07T07:34:58.1199420Z * [new branch] export-D58091437 -> origin/export-D58091437 2025-09-07T07:34:58.1199765Z * [new branch] export-D61047529 -> origin/export-D61047529 2025-09-07T07:34:58.1201623Z * [new branch] export-D70112642 -> origin/export-D70112642 2025-09-07T07:34:58.1201961Z * [new branch] export-D71412006 -> origin/export-D71412006 2025-09-07T07:34:58.1202294Z * [new branch] export-D73042989 -> origin/export-D73042989 
2025-09-07T07:34:58.1202620Z * [new branch] export-D75183591 -> origin/export-D75183591 2025-09-07T07:34:58.1202949Z * [new branch] export-D75617432 -> origin/export-D75617432 2025-09-07T07:34:58.1203273Z * [new branch] export-D75659965 -> origin/export-D75659965 2025-09-07T07:34:58.1203607Z * [new branch] export-D76080931 -> origin/export-D76080931 2025-09-07T07:34:58.1203937Z * [new branch] export-D76797250 -> origin/export-D76797250 2025-09-07T07:34:58.1204564Z * [new branch] export-D76885271 -> origin/export-D76885271 2025-09-07T07:34:58.1205197Z * [new branch] export-D76885620 -> origin/export-D76885620 2025-09-07T07:34:58.1205990Z * [new branch] export-D76936623 -> origin/export-D76936623 2025-09-07T07:34:58.1206839Z * [new branch] export-D76958268 -> origin/export-D76958268 2025-09-07T07:34:58.1207556Z * [new branch] export-D78375400 -> origin/export-D78375400 2025-09-07T07:34:58.1208310Z * [new branch] export-D78431305 -> origin/export-D78431305 2025-09-07T07:34:58.1209018Z * [new branch] export-D78580107 -> origin/export-D78580107 2025-09-07T07:34:58.1210055Z * [new branch] export-D78822171 -> origin/export-D78822171 2025-09-07T07:34:58.1210712Z * [new branch] export-D78822351 -> origin/export-D78822351 2025-09-07T07:34:58.1211219Z * [new branch] export-D78822507 -> origin/export-D78822507 2025-09-07T07:34:58.1216317Z * [new branch] export-D78826994 -> origin/export-D78826994 2025-09-07T07:34:58.1216678Z * [new branch] export-D78894324 -> origin/export-D78894324 2025-09-07T07:34:58.1217022Z * [new branch] export-D78929245 -> origin/export-D78929245 2025-09-07T07:34:58.1217376Z * [new branch] export-D78934925 -> origin/export-D78934925 2025-09-07T07:34:58.1217737Z * [new branch] export-D78953203 -> origin/export-D78953203 2025-09-07T07:34:58.1218082Z * [new branch] export-D78953229 -> origin/export-D78953229 2025-09-07T07:34:58.1218430Z * [new branch] export-D78957093 -> origin/export-D78957093 2025-09-07T07:34:58.1218780Z * [new branch] export-D78957389 -> origin/export-D78957389 2025-09-07T07:34:58.1219306Z * [new branch] export-D78996107 -> origin/export-D78996107 2025-09-07T07:34:58.1219667Z * [new branch] export-D79026433 -> origin/export-D79026433 2025-09-07T07:34:58.1220029Z * [new branch] export-D79230339 -> origin/export-D79230339 2025-09-07T07:34:58.1223003Z * [new branch] export-D79319835 -> origin/export-D79319835 2025-09-07T07:34:58.1223358Z * [new branch] export-D79328456 -> origin/export-D79328456 2025-09-07T07:34:58.1223704Z * [new branch] export-D79534608 -> origin/export-D79534608 2025-09-07T07:34:58.1224021Z * [new branch] export-D79785974 -> origin/export-D79785974 2025-09-07T07:34:58.1224350Z * [new branch] export-D80025417 -> origin/export-D80025417 2025-09-07T07:34:58.1224675Z * [new branch] export-D80120333 -> origin/export-D80120333 2025-09-07T07:34:58.1224999Z * [new branch] export-D80214882 -> origin/export-D80214882 2025-09-07T07:34:58.1225325Z * [new branch] export-D80319069 -> origin/export-D80319069 2025-09-07T07:34:58.1227152Z * [new branch] export-D80321215 -> origin/export-D80321215 2025-09-07T07:34:58.1227494Z * [new branch] export-D80503451 -> origin/export-D80503451 2025-09-07T07:34:58.1227815Z * [new branch] export-D80771648 -> origin/export-D80771648 2025-09-07T07:34:58.1228158Z * [new branch] export-D80823877 -> origin/export-D80823877 2025-09-07T07:34:58.1228496Z * [new branch] export-D80948073 -> origin/export-D80948073 2025-09-07T07:34:58.1228837Z * [new branch] export-D80958642 -> origin/export-D80958642 2025-09-07T07:34:58.1229167Z * 
[new branch] export-D80970483 -> origin/export-D80970483 2025-09-07T07:34:58.1229510Z * [new branch] export-D81054193 -> origin/export-D81054193 2025-09-07T07:34:58.1229836Z * [new branch] export-D81060182 -> origin/export-D81060182 2025-09-07T07:34:58.1232272Z * [new branch] export-D81078973 -> origin/export-D81078973 2025-09-07T07:34:58.1232580Z * [new branch] export-D81204584 -> origin/export-D81204584 2025-09-07T07:34:58.1232888Z * [new branch] export-D81284190 -> origin/export-D81284190 2025-09-07T07:34:58.1233267Z * [new branch] export-D81299840 -> origin/export-D81299840 2025-09-07T07:34:58.1233590Z * [new branch] export-D81429090 -> origin/export-D81429090 2025-09-07T07:34:58.1233898Z * [new branch] export-D81698719 -> origin/export-D81698719 2025-09-07T07:34:58.1234202Z * [new branch] export-D81747409 -> origin/export-D81747409 2025-09-07T07:34:58.1237927Z * [new branch] exported-model-train-idempotent -> origin/exported-model-train-idempotent 2025-09-07T07:34:58.1238378Z * [new branch] ezyang/wip-aot-descriptors -> origin/ezyang/wip-aot-descriptors 2025-09-07T07:34:58.1238729Z * [new branch] fa_u8_brgemm -> origin/fa_u8_brgemm 2025-09-07T07:34:58.1239050Z * [new branch] fastmath_baseline -> origin/fastmath_baseline 2025-09-07T07:34:58.1239356Z * [new branch] fbcode/warm -> origin/fbcode/warm 2025-09-07T07:34:58.1239654Z * [new branch] fca -> origin/fca 2025-09-07T07:34:58.1239946Z * [new branch] fca2_ca5984c -> origin/fca2_ca5984c 2025-09-07T07:34:58.1242948Z * [new branch] fca5 -> origin/fca5 2025-09-07T07:34:58.1243383Z * [new branch] feature/function-numa-binding -> origin/feature/function-numa-binding 2025-09-07T07:34:58.1243927Z * [new branch] feature/function-numa-binding-take2 -> origin/feature/function-numa-binding-take2 2025-09-07T07:34:58.1244376Z * [new branch] feature/numa-nproc-fix -> origin/feature/numa-nproc-fix 2025-09-07T07:34:58.1244797Z * [new branch] feature/numa-signpost-serialize -> origin/feature/numa-signpost-serialize 2025-09-07T07:34:58.1245393Z * [new branch] feature/parallel-numa-binding -> origin/feature/parallel-numa-binding 2025-09-07T07:34:58.1245830Z * [new branch] fengyuan/external-proj -> origin/fengyuan/external-proj 2025-09-07T07:34:58.1246340Z * [new branch] fengyuan/out-of-tree-xpu-ops-improve-test -> origin/fengyuan/out-of-tree-xpu-ops-improve-test 2025-09-07T07:34:58.1246986Z * [new branch] fengyuan/out-of-tree-xpu-ops-remove-dtype -> origin/fengyuan/out-of-tree-xpu-ops-remove-dtype 2025-09-07T07:34:58.1247469Z * [new branch] fengyuan/test-xpu -> origin/fengyuan/test-xpu 2025-09-07T07:34:58.1247851Z * [new branch] ffast_math_baseline -> origin/ffast_math_baseline 2025-09-07T07:34:58.1251859Z * [new branch] ffast_math_target -> origin/ffast_math_target 2025-09-07T07:34:58.1252221Z * [new branch] findhao/base_commit -> origin/findhao/base_commit 2025-09-07T07:34:58.1252583Z * [new branch] findhao/base_commit1 -> origin/findhao/base_commit1 2025-09-07T07:34:58.1252943Z * [new branch] findhao/multistream2 -> origin/findhao/multistream2 2025-09-07T07:34:58.1253313Z * [new branch] findhao/multistream5 -> origin/findhao/multistream5 2025-09-07T07:34:58.1253674Z * [new branch] findhao/multistream6 -> origin/findhao/multistream6 2025-09-07T07:34:58.1254047Z * [new branch] findhao/operatorbench3 -> origin/findhao/operatorbench3 2025-09-07T07:34:58.1254421Z * [new branch] findhao/operatorbench5 -> origin/findhao/operatorbench5 2025-09-07T07:34:58.1256560Z * [new branch] findhao/tritonparse -> origin/findhao/tritonparse 2025-09-07T07:34:58.1257030Z * 
[new branch] fix -> origin/fix 2025-09-07T07:34:58.1257504Z * [new branch] fix-ck-gemm-template-format -> origin/fix-ck-gemm-template-format 2025-09-07T07:34:58.1258032Z * [new branch] fix-config-ignore -> origin/fix-config-ignore 2025-09-07T07:34:58.1258828Z * [new branch] fix-dict-guard -> origin/fix-dict-guard 2025-09-07T07:34:58.1259555Z * [new branch] fix-inductor-periodic-0528 -> origin/fix-inductor-periodic-0528 2025-09-07T07:34:58.1259967Z * [new branch] fix-mps-benchmark -> origin/fix-mps-benchmark 2025-09-07T07:34:58.1260452Z * [new branch] fix-rlease-feature-template -> origin/fix-rlease-feature-template 2025-09-07T07:34:58.1262212Z * [new branch] fix-run-condition-upload-results -> origin/fix-run-condition-upload-results 2025-09-07T07:34:58.1262644Z * [new branch] fix-torchbench -> origin/fix-torchbench 2025-09-07T07:34:58.1262978Z * [new branch] fix_153389 -> origin/fix_153389 2025-09-07T07:34:58.1263315Z * [new branch] fix_fsdp_rs_bucket2 -> origin/fix_fsdp_rs_bucket2 2025-09-07T07:34:58.1263819Z * [new branch] fix_inductor_peridic_tests -> origin/fix_inductor_peridic_tests 2025-09-07T07:34:58.1264684Z * [new branch] fix_ubn_159469 -> origin/fix_ubn_159469 2025-09-07T07:34:58.1265100Z * [new branch] fixes-triage -> origin/fixes-triage 2025-09-07T07:34:58.1265466Z * [new branch] fixflashinfer -> origin/fixflashinfer 2025-09-07T07:34:58.1265835Z * [new branch] flash_decoding_cpu -> origin/flash_decoding_cpu 2025-09-07T07:34:58.1267183Z * [new branch] flex-flash -> origin/flex-flash 2025-09-07T07:34:58.1267712Z * [new branch] flex-lowering -> origin/flex-lowering 2025-09-07T07:34:58.1268059Z * [new branch] flex-warning -> origin/flex-warning 2025-09-07T07:34:58.1268595Z * [new branch] flex_attention_functorch_grad -> origin/flex_attention_functorch_grad 2025-09-07T07:34:58.1269089Z * [new branch] flex_flash -> origin/flex_flash 2025-09-07T07:34:58.1269674Z * [new branch] flexdecode-gqa-groups -> origin/flexdecode-gqa-groups 2025-09-07T07:34:58.1270252Z * [new branch] fmassa/fix_memeff_sharding_rule -> origin/fmassa/fix_memeff_sharding_rule 2025-09-07T07:34:58.1270819Z * [new branch] fsdp2_trace_rules -> origin/fsdp2_trace_rules 2025-09-07T07:34:58.1271694Z * [new branch] fsdpv2_3d -> origin/fsdpv2_3d 2025-09-07T07:34:58.1272116Z * [new branch] fsdpv2_3d_m1 -> origin/fsdpv2_3d_m1 2025-09-07T07:34:58.1272457Z * [new branch] fx_cpp -> origin/fx_cpp 2025-09-07T07:34:58.1272783Z * [new branch] fy/fix-win -> origin/fy/fix-win 2025-09-07T07:34:58.1277136Z * [new branch] gh/AlnisM/1/base -> origin/gh/AlnisM/1/base 2025-09-07T07:34:58.1277566Z * [new branch] gh/AlnisM/1/head -> origin/gh/AlnisM/1/head 2025-09-07T07:34:58.1277929Z * [new branch] gh/CaoE/2/base -> origin/gh/CaoE/2/base 2025-09-07T07:34:58.1278286Z * [new branch] gh/CaoE/2/head -> origin/gh/CaoE/2/head 2025-09-07T07:34:58.1278614Z * [new branch] gh/CaoE/2/orig -> origin/gh/CaoE/2/orig 2025-09-07T07:34:58.1279597Z * [new branch] gh/ColinPeppler/79/base -> origin/gh/ColinPeppler/79/base 2025-09-07T07:34:58.1280107Z * [new branch] gh/ColinPeppler/79/head -> origin/gh/ColinPeppler/79/head 2025-09-07T07:34:58.1280902Z * [new branch] gh/ColinPeppler/79/orig -> origin/gh/ColinPeppler/79/orig 2025-09-07T07:34:58.1281414Z * [new branch] gh/ColinPeppler/80/base -> origin/gh/ColinPeppler/80/base 2025-09-07T07:34:58.1282142Z * [new branch] gh/ColinPeppler/80/head -> origin/gh/ColinPeppler/80/head 2025-09-07T07:34:58.1282738Z * [new branch] gh/ColinPeppler/80/orig -> origin/gh/ColinPeppler/80/orig 2025-09-07T07:34:58.1284298Z * [new 
branch] gh/EikanWang/67/base -> origin/gh/EikanWang/67/base 2025-09-07T07:34:58.1284856Z * [new branch] gh/EikanWang/67/head -> origin/gh/EikanWang/67/head 2025-09-07T07:34:58.1285885Z * [new branch] gh/EikanWang/80/base -> origin/gh/EikanWang/80/base 2025-09-07T07:34:58.1286516Z * [new branch] gh/EikanWang/80/head -> origin/gh/EikanWang/80/head 2025-09-07T07:34:58.1287383Z * [new branch] gh/EikanWang/80/orig -> origin/gh/EikanWang/80/orig 2025-09-07T07:34:58.1292726Z * [new branch] gh/EikanWang/81/base -> origin/gh/EikanWang/81/base 2025-09-07T07:34:58.1293176Z * [new branch] gh/EikanWang/81/head -> origin/gh/EikanWang/81/head 2025-09-07T07:34:58.1293574Z * [new branch] gh/EikanWang/81/orig -> origin/gh/EikanWang/81/orig 2025-09-07T07:34:58.1293950Z * [new branch] gh/EikanWang/82/base -> origin/gh/EikanWang/82/base 2025-09-07T07:34:58.1294331Z * [new branch] gh/EikanWang/82/head -> origin/gh/EikanWang/82/head 2025-09-07T07:34:58.1294692Z * [new branch] gh/EikanWang/82/orig -> origin/gh/EikanWang/82/orig 2025-09-07T07:34:58.1295073Z * [new branch] gh/Gasoonjia/1/base -> origin/gh/Gasoonjia/1/base 2025-09-07T07:34:58.1295452Z * [new branch] gh/Gasoonjia/1/head -> origin/gh/Gasoonjia/1/head 2025-09-07T07:34:58.1296007Z * [new branch] gh/H-Huang/131/base -> origin/gh/H-Huang/131/base 2025-09-07T07:34:58.1296379Z * [new branch] gh/H-Huang/131/head -> origin/gh/H-Huang/131/head 2025-09-07T07:34:58.1296785Z * [new branch] gh/H-Huang/131/orig -> origin/gh/H-Huang/131/orig 2025-09-07T07:34:58.1297654Z * [new branch] gh/H-Huang/132/base -> origin/gh/H-Huang/132/base 2025-09-07T07:34:58.1298884Z * [new branch] gh/H-Huang/132/head -> origin/gh/H-Huang/132/head 2025-09-07T07:34:58.1299231Z * [new branch] gh/H-Huang/132/orig -> origin/gh/H-Huang/132/orig 2025-09-07T07:34:58.1299809Z * [new branch] gh/H-Huang/180/base -> origin/gh/H-Huang/180/base 2025-09-07T07:34:58.1300444Z * [new branch] gh/H-Huang/180/head -> origin/gh/H-Huang/180/head 2025-09-07T07:34:58.1301149Z * [new branch] gh/H-Huang/180/orig -> origin/gh/H-Huang/180/orig 2025-09-07T07:34:58.1305753Z * [new branch] gh/H-Huang/182/base -> origin/gh/H-Huang/182/base 2025-09-07T07:34:58.1306326Z * [new branch] gh/H-Huang/182/head -> origin/gh/H-Huang/182/head 2025-09-07T07:34:58.1306692Z * [new branch] gh/H-Huang/182/orig -> origin/gh/H-Huang/182/orig 2025-09-07T07:34:58.1307157Z * [new branch] gh/H-Huang/187/base -> origin/gh/H-Huang/187/base 2025-09-07T07:34:58.1307970Z * [new branch] gh/H-Huang/187/head -> origin/gh/H-Huang/187/head 2025-09-07T07:34:58.1308439Z * [new branch] gh/H-Huang/187/orig -> origin/gh/H-Huang/187/orig 2025-09-07T07:34:58.1308817Z * [new branch] gh/H-Huang/202/base -> origin/gh/H-Huang/202/base 2025-09-07T07:34:58.1309189Z * [new branch] gh/H-Huang/202/head -> origin/gh/H-Huang/202/head 2025-09-07T07:34:58.1309559Z * [new branch] gh/H-Huang/202/orig -> origin/gh/H-Huang/202/orig 2025-09-07T07:34:58.1315239Z * [new branch] gh/H-Huang/203/base -> origin/gh/H-Huang/203/base 2025-09-07T07:34:58.1315662Z * [new branch] gh/H-Huang/203/head -> origin/gh/H-Huang/203/head 2025-09-07T07:34:58.1316018Z * [new branch] gh/H-Huang/203/orig -> origin/gh/H-Huang/203/orig 2025-09-07T07:34:58.1316372Z * [new branch] gh/H-Huang/204/base -> origin/gh/H-Huang/204/base 2025-09-07T07:34:58.1316719Z * [new branch] gh/H-Huang/204/head -> origin/gh/H-Huang/204/head 2025-09-07T07:34:58.1317223Z * [new branch] gh/H-Huang/204/orig -> origin/gh/H-Huang/204/orig 2025-09-07T07:34:58.1317576Z * [new branch] gh/H-Huang/205/base -> 
origin/gh/H-Huang/205/base 2025-09-07T07:34:58.1317983Z * [new branch] gh/H-Huang/205/head -> origin/gh/H-Huang/205/head 2025-09-07T07:34:58.1318331Z * [new branch] gh/H-Huang/205/orig -> origin/gh/H-Huang/205/orig 2025-09-07T07:34:58.1318878Z * [new branch] gh/H-Huang/206/base -> origin/gh/H-Huang/206/base 2025-09-07T07:34:58.1319232Z * [new branch] gh/H-Huang/206/head -> origin/gh/H-Huang/206/head 2025-09-07T07:34:58.1319584Z * [new branch] gh/H-Huang/206/orig -> origin/gh/H-Huang/206/orig 2025-09-07T07:34:58.1319929Z * [new branch] gh/H-Huang/207/base -> origin/gh/H-Huang/207/base 2025-09-07T07:34:58.1320278Z * [new branch] gh/H-Huang/207/head -> origin/gh/H-Huang/207/head 2025-09-07T07:34:58.1320629Z * [new branch] gh/H-Huang/207/orig -> origin/gh/H-Huang/207/orig 2025-09-07T07:34:58.1320973Z * [new branch] gh/H-Huang/208/base -> origin/gh/H-Huang/208/base 2025-09-07T07:34:58.1321323Z * [new branch] gh/H-Huang/208/head -> origin/gh/H-Huang/208/head 2025-09-07T07:34:58.1324272Z * [new branch] gh/H-Huang/208/orig -> origin/gh/H-Huang/208/orig 2025-09-07T07:34:58.1324637Z * [new branch] gh/H-Huang/209/base -> origin/gh/H-Huang/209/base 2025-09-07T07:34:58.1324992Z * [new branch] gh/H-Huang/209/head -> origin/gh/H-Huang/209/head 2025-09-07T07:34:58.1325340Z * [new branch] gh/H-Huang/209/orig -> origin/gh/H-Huang/209/orig 2025-09-07T07:34:58.1325683Z * [new branch] gh/H-Huang/210/base -> origin/gh/H-Huang/210/base 2025-09-07T07:34:58.1326042Z * [new branch] gh/H-Huang/210/head -> origin/gh/H-Huang/210/head 2025-09-07T07:34:58.1326403Z * [new branch] gh/H-Huang/210/orig -> origin/gh/H-Huang/210/orig 2025-09-07T07:34:58.1327060Z * [new branch] gh/H-Huang/211/base -> origin/gh/H-Huang/211/base 2025-09-07T07:34:58.1327471Z * [new branch] gh/H-Huang/211/head -> origin/gh/H-Huang/211/head 2025-09-07T07:34:58.1327819Z * [new branch] gh/H-Huang/211/orig -> origin/gh/H-Huang/211/orig 2025-09-07T07:34:58.1332031Z * [new branch] gh/H-Huang/212/base -> origin/gh/H-Huang/212/base 2025-09-07T07:34:58.1332386Z * [new branch] gh/H-Huang/212/head -> origin/gh/H-Huang/212/head 2025-09-07T07:34:58.1332735Z * [new branch] gh/H-Huang/212/orig -> origin/gh/H-Huang/212/orig 2025-09-07T07:34:58.1333084Z * [new branch] gh/H-Huang/213/base -> origin/gh/H-Huang/213/base 2025-09-07T07:34:58.1333435Z * [new branch] gh/H-Huang/213/head -> origin/gh/H-Huang/213/head 2025-09-07T07:34:58.1333799Z * [new branch] gh/H-Huang/213/orig -> origin/gh/H-Huang/213/orig 2025-09-07T07:34:58.1334151Z * [new branch] gh/H-Huang/214/base -> origin/gh/H-Huang/214/base 2025-09-07T07:34:58.1334492Z * [new branch] gh/H-Huang/214/head -> origin/gh/H-Huang/214/head 2025-09-07T07:34:58.1337550Z * [new branch] gh/H-Huang/214/orig -> origin/gh/H-Huang/214/orig 2025-09-07T07:34:58.1337920Z * [new branch] gh/IvanKobzarev/112/base -> origin/gh/IvanKobzarev/112/base 2025-09-07T07:34:58.1338305Z * [new branch] gh/IvanKobzarev/112/head -> origin/gh/IvanKobzarev/112/head 2025-09-07T07:34:58.1338679Z * [new branch] gh/IvanKobzarev/112/orig -> origin/gh/IvanKobzarev/112/orig 2025-09-07T07:34:58.1339089Z * [new branch] gh/IvanKobzarev/115/base -> origin/gh/IvanKobzarev/115/base 2025-09-07T07:34:58.1339514Z * [new branch] gh/IvanKobzarev/115/head -> origin/gh/IvanKobzarev/115/head 2025-09-07T07:34:58.1339892Z * [new branch] gh/IvanKobzarev/115/orig -> origin/gh/IvanKobzarev/115/orig 2025-09-07T07:34:58.1343266Z * [new branch] gh/IvanKobzarev/116/base -> origin/gh/IvanKobzarev/116/base 2025-09-07T07:34:58.1343649Z * [new branch] gh/IvanKobzarev/116/head -> 
origin/gh/IvanKobzarev/116/head 2025-09-07T07:34:58.1344026Z * [new branch] gh/IvanKobzarev/116/orig -> origin/gh/IvanKobzarev/116/orig 2025-09-07T07:34:58.1344394Z * [new branch] gh/IvanKobzarev/118/base -> origin/gh/IvanKobzarev/118/base 2025-09-07T07:34:58.1344763Z * [new branch] gh/IvanKobzarev/118/head -> origin/gh/IvanKobzarev/118/head 2025-09-07T07:34:58.1345328Z * [new branch] gh/IvanKobzarev/118/orig -> origin/gh/IvanKobzarev/118/orig 2025-09-07T07:34:58.1345691Z * [new branch] gh/IvanKobzarev/126/base -> origin/gh/IvanKobzarev/126/base 2025-09-07T07:34:58.1346041Z * [new branch] gh/IvanKobzarev/126/head -> origin/gh/IvanKobzarev/126/head 2025-09-07T07:34:58.1350878Z * [new branch] gh/IvanKobzarev/126/orig -> origin/gh/IvanKobzarev/126/orig 2025-09-07T07:34:58.1351451Z * [new branch] gh/IvanKobzarev/127/base -> origin/gh/IvanKobzarev/127/base 2025-09-07T07:34:58.1352064Z * [new branch] gh/IvanKobzarev/127/head -> origin/gh/IvanKobzarev/127/head 2025-09-07T07:34:58.1352476Z * [new branch] gh/IvanKobzarev/127/orig -> origin/gh/IvanKobzarev/127/orig 2025-09-07T07:34:58.1352857Z * [new branch] gh/IvanKobzarev/128/base -> origin/gh/IvanKobzarev/128/base 2025-09-07T07:34:58.1353254Z * [new branch] gh/IvanKobzarev/128/head -> origin/gh/IvanKobzarev/128/head 2025-09-07T07:34:58.1353676Z * [new branch] gh/IvanKobzarev/128/orig -> origin/gh/IvanKobzarev/128/orig 2025-09-07T07:34:58.1354074Z * [new branch] gh/IvanKobzarev/132/base -> origin/gh/IvanKobzarev/132/base 2025-09-07T07:34:58.1354471Z * [new branch] gh/IvanKobzarev/132/head -> origin/gh/IvanKobzarev/132/head 2025-09-07T07:34:58.1355364Z * [new branch] gh/IvanKobzarev/132/orig -> origin/gh/IvanKobzarev/132/orig 2025-09-07T07:34:58.1355867Z * [new branch] gh/IvanKobzarev/133/base -> origin/gh/IvanKobzarev/133/base 2025-09-07T07:34:58.1356272Z * [new branch] gh/IvanKobzarev/133/head -> origin/gh/IvanKobzarev/133/head 2025-09-07T07:34:58.1356635Z * [new branch] gh/IvanKobzarev/133/orig -> origin/gh/IvanKobzarev/133/orig 2025-09-07T07:34:58.1357004Z * [new branch] gh/IvanKobzarev/134/base -> origin/gh/IvanKobzarev/134/base 2025-09-07T07:34:58.1357365Z * [new branch] gh/IvanKobzarev/134/head -> origin/gh/IvanKobzarev/134/head 2025-09-07T07:34:58.1363200Z * [new branch] gh/IvanKobzarev/134/orig -> origin/gh/IvanKobzarev/134/orig 2025-09-07T07:34:58.1363656Z * [new branch] gh/IvanKobzarev/135/base -> origin/gh/IvanKobzarev/135/base 2025-09-07T07:34:58.1364035Z * [new branch] gh/IvanKobzarev/135/head -> origin/gh/IvanKobzarev/135/head 2025-09-07T07:34:58.1364412Z * [new branch] gh/IvanKobzarev/135/orig -> origin/gh/IvanKobzarev/135/orig 2025-09-07T07:34:58.1364810Z * [new branch] gh/IvanKobzarev/136/base -> origin/gh/IvanKobzarev/136/base 2025-09-07T07:34:58.1365183Z * [new branch] gh/IvanKobzarev/136/head -> origin/gh/IvanKobzarev/136/head 2025-09-07T07:34:58.1365565Z * [new branch] gh/IvanKobzarev/136/orig -> origin/gh/IvanKobzarev/136/orig 2025-09-07T07:34:58.1365935Z * [new branch] gh/IvanKobzarev/137/base -> origin/gh/IvanKobzarev/137/base 2025-09-07T07:34:58.1366310Z * [new branch] gh/IvanKobzarev/137/head -> origin/gh/IvanKobzarev/137/head 2025-09-07T07:34:58.1367239Z * [new branch] gh/IvanKobzarev/137/orig -> origin/gh/IvanKobzarev/137/orig 2025-09-07T07:34:58.1367624Z * [new branch] gh/IvanKobzarev/138/base -> origin/gh/IvanKobzarev/138/base 2025-09-07T07:34:58.1368008Z * [new branch] gh/IvanKobzarev/138/head -> origin/gh/IvanKobzarev/138/head 2025-09-07T07:34:58.1368397Z * [new branch] gh/IvanKobzarev/138/orig -> 
origin/gh/IvanKobzarev/138/orig 2025-09-07T07:34:58.1368772Z * [new branch] gh/IvanKobzarev/139/base -> origin/gh/IvanKobzarev/139/base 2025-09-07T07:34:58.1369151Z * [new branch] gh/IvanKobzarev/139/head -> origin/gh/IvanKobzarev/139/head 2025-09-07T07:34:58.1374620Z * [new branch] gh/IvanKobzarev/139/orig -> origin/gh/IvanKobzarev/139/orig 2025-09-07T07:34:58.1375223Z * [new branch] gh/IvanKobzarev/140/base -> origin/gh/IvanKobzarev/140/base 2025-09-07T07:34:58.1375759Z * [new branch] gh/IvanKobzarev/140/head -> origin/gh/IvanKobzarev/140/head 2025-09-07T07:34:58.1376590Z * [new branch] gh/IvanKobzarev/140/orig -> origin/gh/IvanKobzarev/140/orig 2025-09-07T07:34:58.1377020Z * [new branch] gh/IvanKobzarev/141/base -> origin/gh/IvanKobzarev/141/base 2025-09-07T07:34:58.1377391Z * [new branch] gh/IvanKobzarev/141/head -> origin/gh/IvanKobzarev/141/head 2025-09-07T07:34:58.1377908Z * [new branch] gh/IvanKobzarev/141/orig -> origin/gh/IvanKobzarev/141/orig 2025-09-07T07:34:58.1378269Z * [new branch] gh/IvanKobzarev/142/base -> origin/gh/IvanKobzarev/142/base 2025-09-07T07:34:58.1378626Z * [new branch] gh/IvanKobzarev/142/head -> origin/gh/IvanKobzarev/142/head 2025-09-07T07:34:58.1378979Z * [new branch] gh/IvanKobzarev/142/orig -> origin/gh/IvanKobzarev/142/orig 2025-09-07T07:34:58.1379328Z * [new branch] gh/IvanKobzarev/143/base -> origin/gh/IvanKobzarev/143/base 2025-09-07T07:34:58.1379914Z * [new branch] gh/IvanKobzarev/143/head -> origin/gh/IvanKobzarev/143/head 2025-09-07T07:34:58.1380406Z * [new branch] gh/IvanKobzarev/143/orig -> origin/gh/IvanKobzarev/143/orig 2025-09-07T07:34:58.1386432Z * [new branch] gh/IvanKobzarev/144/base -> origin/gh/IvanKobzarev/144/base 2025-09-07T07:34:58.1391474Z * [new branch] gh/IvanKobzarev/144/head -> origin/gh/IvanKobzarev/144/head 2025-09-07T07:34:58.1396961Z * [new branch] gh/IvanKobzarev/144/orig -> origin/gh/IvanKobzarev/144/orig 2025-09-07T07:34:58.1401337Z * [new branch] gh/IvanKobzarev/145/base -> origin/gh/IvanKobzarev/145/base 2025-09-07T07:34:58.1401778Z * [new branch] gh/IvanKobzarev/145/head -> origin/gh/IvanKobzarev/145/head 2025-09-07T07:34:58.1402149Z * [new branch] gh/IvanKobzarev/145/orig -> origin/gh/IvanKobzarev/145/orig 2025-09-07T07:34:58.1402534Z * [new branch] gh/IvanKobzarev/146/base -> origin/gh/IvanKobzarev/146/base 2025-09-07T07:34:58.1402914Z * [new branch] gh/IvanKobzarev/146/head -> origin/gh/IvanKobzarev/146/head 2025-09-07T07:34:58.1403278Z * [new branch] gh/IvanKobzarev/146/orig -> origin/gh/IvanKobzarev/146/orig 2025-09-07T07:34:58.1403661Z * [new branch] gh/NikhilAPatel/1/base -> origin/gh/NikhilAPatel/1/base 2025-09-07T07:34:58.1404027Z * [new branch] gh/NikhilAPatel/1/head -> origin/gh/NikhilAPatel/1/head 2025-09-07T07:34:58.1404384Z * [new branch] gh/NikhilAPatel/2/base -> origin/gh/NikhilAPatel/2/base 2025-09-07T07:34:58.1404737Z * [new branch] gh/NikhilAPatel/2/head -> origin/gh/NikhilAPatel/2/head 2025-09-07T07:34:58.1405100Z * [new branch] gh/NikhilAPatel/4/base -> origin/gh/NikhilAPatel/4/base 2025-09-07T07:34:58.1405489Z * [new branch] gh/NikhilAPatel/4/head -> origin/gh/NikhilAPatel/4/head 2025-09-07T07:34:58.1406022Z * [new branch] gh/PaliC/1/base -> origin/gh/PaliC/1/base 2025-09-07T07:34:58.1406366Z * [new branch] gh/PaliC/1/head -> origin/gh/PaliC/1/head 2025-09-07T07:34:58.1406894Z * [new branch] gh/PaliC/1/orig -> origin/gh/PaliC/1/orig 2025-09-07T07:34:58.1407259Z * [new branch] gh/PaliC/17/base -> origin/gh/PaliC/17/base 2025-09-07T07:34:58.1407594Z * [new branch] gh/PaliC/17/head -> 
origin/gh/PaliC/17/head 2025-09-07T07:34:58.1407935Z * [new branch] gh/PaliC/17/orig -> origin/gh/PaliC/17/orig 2025-09-07T07:34:58.1408269Z * [new branch] gh/PaliC/18/base -> origin/gh/PaliC/18/base 2025-09-07T07:34:58.1408581Z * [new branch] gh/PaliC/18/head -> origin/gh/PaliC/18/head 2025-09-07T07:34:58.1408892Z * [new branch] gh/PaliC/18/orig -> origin/gh/PaliC/18/orig 2025-09-07T07:34:58.1409201Z * [new branch] gh/PaliC/2/base -> origin/gh/PaliC/2/base 2025-09-07T07:34:58.1409516Z * [new branch] gh/PaliC/2/head -> origin/gh/PaliC/2/head 2025-09-07T07:34:58.1409827Z * [new branch] gh/PaliC/2/orig -> origin/gh/PaliC/2/orig 2025-09-07T07:34:58.1410138Z * [new branch] gh/PaliC/20/base -> origin/gh/PaliC/20/base 2025-09-07T07:34:58.1410518Z * [new branch] gh/PaliC/20/head -> origin/gh/PaliC/20/head 2025-09-07T07:34:58.1410832Z * [new branch] gh/PaliC/20/orig -> origin/gh/PaliC/20/orig 2025-09-07T07:34:58.1411149Z * [new branch] gh/PaliC/21/base -> origin/gh/PaliC/21/base 2025-09-07T07:34:58.1411464Z * [new branch] gh/PaliC/21/head -> origin/gh/PaliC/21/head 2025-09-07T07:34:58.1411782Z * [new branch] gh/PaliC/21/orig -> origin/gh/PaliC/21/orig 2025-09-07T07:34:58.1412110Z * [new branch] gh/PaliC/22/base -> origin/gh/PaliC/22/base 2025-09-07T07:34:58.1412426Z * [new branch] gh/PaliC/22/head -> origin/gh/PaliC/22/head 2025-09-07T07:34:58.1412766Z * [new branch] gh/PaliC/22/orig -> origin/gh/PaliC/22/orig 2025-09-07T07:34:58.1413088Z * [new branch] gh/PaliC/23/base -> origin/gh/PaliC/23/base 2025-09-07T07:34:58.1413408Z * [new branch] gh/PaliC/23/head -> origin/gh/PaliC/23/head 2025-09-07T07:34:58.1413729Z * [new branch] gh/PaliC/23/orig -> origin/gh/PaliC/23/orig 2025-09-07T07:34:58.1414040Z * [new branch] gh/PaliC/24/base -> origin/gh/PaliC/24/base 2025-09-07T07:34:58.1414360Z * [new branch] gh/PaliC/24/head -> origin/gh/PaliC/24/head 2025-09-07T07:34:58.1414680Z * [new branch] gh/PaliC/24/orig -> origin/gh/PaliC/24/orig 2025-09-07T07:34:58.1415032Z * [new branch] gh/PaulZhang12/17/base -> origin/gh/PaulZhang12/17/base 2025-09-07T07:34:58.1415390Z * [new branch] gh/PaulZhang12/17/head -> origin/gh/PaulZhang12/17/head 2025-09-07T07:34:58.1415741Z * [new branch] gh/PaulZhang12/20/base -> origin/gh/PaulZhang12/20/base 2025-09-07T07:34:58.1416096Z * [new branch] gh/PaulZhang12/20/head -> origin/gh/PaulZhang12/20/head 2025-09-07T07:34:58.1416448Z * [new branch] gh/PaulZhang12/20/orig -> origin/gh/PaulZhang12/20/orig 2025-09-07T07:34:58.1416798Z * [new branch] gh/PaulZhang12/21/base -> origin/gh/PaulZhang12/21/base 2025-09-07T07:34:58.1417149Z * [new branch] gh/PaulZhang12/21/head -> origin/gh/PaulZhang12/21/head 2025-09-07T07:34:58.1417501Z * [new branch] gh/PaulZhang12/21/orig -> origin/gh/PaulZhang12/21/orig 2025-09-07T07:34:58.1417851Z * [new branch] gh/PaulZhang12/22/base -> origin/gh/PaulZhang12/22/base 2025-09-07T07:34:58.1418230Z * [new branch] gh/PaulZhang12/22/head -> origin/gh/PaulZhang12/22/head 2025-09-07T07:34:58.1418577Z * [new branch] gh/PaulZhang12/22/orig -> origin/gh/PaulZhang12/22/orig 2025-09-07T07:34:58.1418922Z * [new branch] gh/PaulZhang12/23/base -> origin/gh/PaulZhang12/23/base 2025-09-07T07:34:58.1419266Z * [new branch] gh/PaulZhang12/23/head -> origin/gh/PaulZhang12/23/head 2025-09-07T07:34:58.1419595Z * [new branch] gh/PaulZhang12/23/orig -> origin/gh/PaulZhang12/23/orig 2025-09-07T07:34:58.1419927Z * [new branch] gh/PaulZhang12/24/base -> origin/gh/PaulZhang12/24/base 2025-09-07T07:34:58.1420258Z * [new branch] gh/PaulZhang12/24/head -> origin/gh/PaulZhang12/24/head 
2025-09-07T07:34:58.1420596Z * [new branch] gh/PaulZhang12/24/orig -> origin/gh/PaulZhang12/24/orig 2025-09-07T07:34:58.1420931Z * [new branch] gh/PaulZhang12/25/base -> origin/gh/PaulZhang12/25/base 2025-09-07T07:34:58.1421486Z * [new branch] gh/PaulZhang12/25/head -> origin/gh/PaulZhang12/25/head 2025-09-07T07:34:58.1422200Z * [new branch] gh/PaulZhang12/25/orig -> origin/gh/PaulZhang12/25/orig 2025-09-07T07:34:58.1423511Z * [new branch] gh/SamGinzburg/11/base -> origin/gh/SamGinzburg/11/base 2025-09-07T07:34:58.1423894Z * [new branch] gh/SamGinzburg/11/head -> origin/gh/SamGinzburg/11/head 2025-09-07T07:34:58.1425586Z * [new branch] gh/Sidharth123-cpu/24/base -> origin/gh/Sidharth123-cpu/24/base 2025-09-07T07:34:58.1425967Z * [new branch] gh/Sidharth123-cpu/25/base -> origin/gh/Sidharth123-cpu/25/base 2025-09-07T07:34:58.1427218Z * [new branch] gh/Sidharth123-cpu/26/base -> origin/gh/Sidharth123-cpu/26/base 2025-09-07T07:34:58.1427909Z * [new branch] gh/Sidharth123-cpu/27/base -> origin/gh/Sidharth123-cpu/27/base 2025-09-07T07:34:58.1429158Z * [new branch] gh/StrongerXi/1/base -> origin/gh/StrongerXi/1/base 2025-09-07T07:34:58.1429502Z * [new branch] gh/StrongerXi/1/head -> origin/gh/StrongerXi/1/head 2025-09-07T07:34:58.1430644Z * [new branch] gh/StrongerXi/133/base -> origin/gh/StrongerXi/133/base 2025-09-07T07:34:58.1430994Z * [new branch] gh/StrongerXi/133/head -> origin/gh/StrongerXi/133/head 2025-09-07T07:34:58.1431616Z * [new branch] gh/StrongerXi/133/orig -> origin/gh/StrongerXi/133/orig 2025-09-07T07:34:58.1433384Z * [new branch] gh/StrongerXi/134/base -> origin/gh/StrongerXi/134/base 2025-09-07T07:34:58.1433817Z * [new branch] gh/StrongerXi/134/head -> origin/gh/StrongerXi/134/head 2025-09-07T07:34:58.1434212Z * [new branch] gh/StrongerXi/134/orig -> origin/gh/StrongerXi/134/orig 2025-09-07T07:34:58.1434856Z * [new branch] gh/StrongerXi/136/base -> origin/gh/StrongerXi/136/base 2025-09-07T07:34:58.1435448Z * [new branch] gh/StrongerXi/136/head -> origin/gh/StrongerXi/136/head 2025-09-07T07:34:58.1436146Z * [new branch] gh/StrongerXi/136/orig -> origin/gh/StrongerXi/136/orig 2025-09-07T07:34:58.1436961Z * [new branch] gh/StrongerXi/137/base -> origin/gh/StrongerXi/137/base 2025-09-07T07:34:58.1437585Z * [new branch] gh/StrongerXi/137/head -> origin/gh/StrongerXi/137/head 2025-09-07T07:34:58.1438258Z * [new branch] gh/StrongerXi/137/orig -> origin/gh/StrongerXi/137/orig 2025-09-07T07:34:58.1439038Z * [new branch] gh/StrongerXi/138/base -> origin/gh/StrongerXi/138/base 2025-09-07T07:34:58.1439667Z * [new branch] gh/StrongerXi/138/head -> origin/gh/StrongerXi/138/head 2025-09-07T07:34:58.1440333Z * [new branch] gh/StrongerXi/138/orig -> origin/gh/StrongerXi/138/orig 2025-09-07T07:34:58.1441221Z * [new branch] gh/StrongerXi/139/base -> origin/gh/StrongerXi/139/base 2025-09-07T07:34:58.1441849Z * [new branch] gh/StrongerXi/139/head -> origin/gh/StrongerXi/139/head 2025-09-07T07:34:58.1443587Z * [new branch] gh/StrongerXi/139/orig -> origin/gh/StrongerXi/139/orig 2025-09-07T07:34:58.1443990Z * [new branch] gh/StrongerXi/140/base -> origin/gh/StrongerXi/140/base 2025-09-07T07:34:58.1444368Z * [new branch] gh/StrongerXi/140/head -> origin/gh/StrongerXi/140/head 2025-09-07T07:34:58.1444793Z * [new branch] gh/StrongerXi/140/orig -> origin/gh/StrongerXi/140/orig 2025-09-07T07:34:58.1446917Z * [new branch] gh/StrongerXi/71/base -> origin/gh/StrongerXi/71/base 2025-09-07T07:34:58.1447345Z * [new branch] gh/StrongerXi/71/head -> origin/gh/StrongerXi/71/head 2025-09-07T07:34:58.1447706Z * 
[new branch] gh/StrongerXi/72/base -> origin/gh/StrongerXi/72/base 2025-09-07T07:34:58.1448224Z * [new branch] gh/StrongerXi/72/head -> origin/gh/StrongerXi/72/head 2025-09-07T07:34:58.1450457Z * [new branch] gh/XilunWu/133/base -> origin/gh/XilunWu/133/base 2025-09-07T07:34:58.1451011Z * [new branch] gh/XilunWu/133/head -> origin/gh/XilunWu/133/head 2025-09-07T07:34:58.1451722Z * [new branch] gh/XilunWu/133/orig -> origin/gh/XilunWu/133/orig 2025-09-07T07:34:58.1452223Z * [new branch] gh/XilunWu/139/base -> origin/gh/XilunWu/139/base 2025-09-07T07:34:58.1452685Z * [new branch] gh/XilunWu/139/head -> origin/gh/XilunWu/139/head 2025-09-07T07:34:58.1453149Z * [new branch] gh/XilunWu/139/orig -> origin/gh/XilunWu/139/orig 2025-09-07T07:34:58.1453823Z * [new branch] gh/XilunWu/143/base -> origin/gh/XilunWu/143/base 2025-09-07T07:34:58.1454663Z * [new branch] gh/XilunWu/143/head -> origin/gh/XilunWu/143/head 2025-09-07T07:34:58.1455142Z * [new branch] gh/XilunWu/143/orig -> origin/gh/XilunWu/143/orig 2025-09-07T07:34:58.1457816Z * [new branch] gh/XilunWu/144/base -> origin/gh/XilunWu/144/base 2025-09-07T07:34:58.1458386Z * [new branch] gh/XilunWu/144/head -> origin/gh/XilunWu/144/head 2025-09-07T07:34:58.1458875Z * [new branch] gh/XilunWu/144/orig -> origin/gh/XilunWu/144/orig 2025-09-07T07:34:58.1459221Z * [new branch] gh/XilunWu/145/base -> origin/gh/XilunWu/145/base 2025-09-07T07:34:58.1459554Z * [new branch] gh/XilunWu/145/head -> origin/gh/XilunWu/145/head 2025-09-07T07:34:58.1459889Z * [new branch] gh/XilunWu/145/orig -> origin/gh/XilunWu/145/orig 2025-09-07T07:34:58.1460501Z * [new branch] gh/XilunWu/146/base -> origin/gh/XilunWu/146/base 2025-09-07T07:34:58.1461191Z * [new branch] gh/XilunWu/146/head -> origin/gh/XilunWu/146/head 2025-09-07T07:34:58.1461866Z * [new branch] gh/XilunWu/146/orig -> origin/gh/XilunWu/146/orig 2025-09-07T07:34:58.1465296Z * [new branch] gh/XilunWu/147/base -> origin/gh/XilunWu/147/base 2025-09-07T07:34:58.1465763Z * [new branch] gh/XilunWu/147/head -> origin/gh/XilunWu/147/head 2025-09-07T07:34:58.1466151Z * [new branch] gh/XilunWu/147/orig -> origin/gh/XilunWu/147/orig 2025-09-07T07:34:58.1466513Z * [new branch] gh/XilunWu/148/base -> origin/gh/XilunWu/148/base 2025-09-07T07:34:58.1466878Z * [new branch] gh/XilunWu/148/head -> origin/gh/XilunWu/148/head 2025-09-07T07:34:58.1467225Z * [new branch] gh/XilunWu/148/orig -> origin/gh/XilunWu/148/orig 2025-09-07T07:34:58.1467734Z * [new branch] gh/XilunWu/149/base -> origin/gh/XilunWu/149/base 2025-09-07T07:34:58.1468496Z * [new branch] gh/XilunWu/149/head -> origin/gh/XilunWu/149/head 2025-09-07T07:34:58.1474184Z * [new branch] gh/XilunWu/149/orig -> origin/gh/XilunWu/149/orig 2025-09-07T07:34:58.1474710Z * [new branch] gh/XilunWu/150/base -> origin/gh/XilunWu/150/base 2025-09-07T07:34:58.1475178Z * [new branch] gh/XilunWu/150/head -> origin/gh/XilunWu/150/head 2025-09-07T07:34:58.1475984Z * [new branch] gh/XilunWu/150/orig -> origin/gh/XilunWu/150/orig 2025-09-07T07:34:58.1476396Z * [new branch] gh/XilunWu/151/base -> origin/gh/XilunWu/151/base 2025-09-07T07:34:58.1476727Z * [new branch] gh/XilunWu/151/head -> origin/gh/XilunWu/151/head 2025-09-07T07:34:58.1477062Z * [new branch] gh/XilunWu/151/orig -> origin/gh/XilunWu/151/orig 2025-09-07T07:34:58.1477393Z * [new branch] gh/XilunWu/152/base -> origin/gh/XilunWu/152/base 2025-09-07T07:34:58.1477731Z * [new branch] gh/XilunWu/152/head -> origin/gh/XilunWu/152/head 2025-09-07T07:34:58.1478061Z * [new branch] gh/XilunWu/152/orig -> origin/gh/XilunWu/152/orig 
2025-09-07T07:34:58.1478381Z * [new branch] gh/XilunWu/153/base -> origin/gh/XilunWu/153/base 2025-09-07T07:34:58.1478843Z * [new branch] gh/XilunWu/153/head -> origin/gh/XilunWu/153/head 2025-09-07T07:34:58.1479199Z * [new branch] gh/XilunWu/153/orig -> origin/gh/XilunWu/153/orig 2025-09-07T07:34:58.1479538Z * [new branch] gh/XilunWu/160/base -> origin/gh/XilunWu/160/base 2025-09-07T07:34:58.1479877Z * [new branch] gh/XilunWu/160/head -> origin/gh/XilunWu/160/head 2025-09-07T07:34:58.1480384Z * [new branch] gh/XilunWu/160/orig -> origin/gh/XilunWu/160/orig 2025-09-07T07:34:58.1480729Z * [new branch] gh/XilunWu/161/base -> origin/gh/XilunWu/161/base 2025-09-07T07:34:58.1481091Z * [new branch] gh/XilunWu/161/head -> origin/gh/XilunWu/161/head 2025-09-07T07:34:58.1481698Z * [new branch] gh/XilunWu/161/orig -> origin/gh/XilunWu/161/orig 2025-09-07T07:34:58.1483035Z * [new branch] gh/XilunWu/163/base -> origin/gh/XilunWu/163/base 2025-09-07T07:34:58.1483846Z * [new branch] gh/XilunWu/163/head -> origin/gh/XilunWu/163/head 2025-09-07T07:34:58.1484525Z * [new branch] gh/XilunWu/163/orig -> origin/gh/XilunWu/163/orig 2025-09-07T07:34:58.1486432Z * [new branch] gh/XilunWu/164/base -> origin/gh/XilunWu/164/base 2025-09-07T07:34:58.1487117Z * [new branch] gh/XilunWu/164/head -> origin/gh/XilunWu/164/head 2025-09-07T07:34:58.1487511Z * [new branch] gh/XilunWu/164/orig -> origin/gh/XilunWu/164/orig 2025-09-07T07:34:58.1494324Z * [new branch] gh/XilunWu/165/base -> origin/gh/XilunWu/165/base 2025-09-07T07:34:58.1494795Z * [new branch] gh/XilunWu/165/head -> origin/gh/XilunWu/165/head 2025-09-07T07:34:58.1501045Z * [new branch] gh/XilunWu/165/orig -> origin/gh/XilunWu/165/orig 2025-09-07T07:34:58.1502996Z * [new branch] gh/XilunWu/166/base -> origin/gh/XilunWu/166/base 2025-09-07T07:34:58.1503426Z * [new branch] gh/XilunWu/166/head -> origin/gh/XilunWu/166/head 2025-09-07T07:34:58.1503788Z * [new branch] gh/XilunWu/166/orig -> origin/gh/XilunWu/166/orig 2025-09-07T07:34:58.1504183Z * [new branch] gh/XilunWu/167/base -> origin/gh/XilunWu/167/base 2025-09-07T07:34:58.1504536Z * [new branch] gh/XilunWu/167/head -> origin/gh/XilunWu/167/head 2025-09-07T07:34:58.1504928Z * [new branch] gh/XilunWu/167/orig -> origin/gh/XilunWu/167/orig 2025-09-07T07:34:58.1505442Z * [new branch] gh/XilunWu/168/base -> origin/gh/XilunWu/168/base 2025-09-07T07:34:58.1505792Z * [new branch] gh/XilunWu/168/head -> origin/gh/XilunWu/168/head 2025-09-07T07:34:58.1506142Z * [new branch] gh/XilunWu/168/orig -> origin/gh/XilunWu/168/orig 2025-09-07T07:34:58.1506504Z * [new branch] gh/XilunWu/169/base -> origin/gh/XilunWu/169/base 2025-09-07T07:34:58.1506877Z * [new branch] gh/XilunWu/169/head -> origin/gh/XilunWu/169/head 2025-09-07T07:34:58.1507231Z * [new branch] gh/XilunWu/169/orig -> origin/gh/XilunWu/169/orig 2025-09-07T07:34:58.1507589Z * [new branch] gh/XilunWu/170/base -> origin/gh/XilunWu/170/base 2025-09-07T07:34:58.1507963Z * [new branch] gh/XilunWu/170/head -> origin/gh/XilunWu/170/head 2025-09-07T07:34:58.1508313Z * [new branch] gh/XilunWu/170/orig -> origin/gh/XilunWu/170/orig 2025-09-07T07:34:58.1508692Z * [new branch] gh/XuehaiPan/14/base -> origin/gh/XuehaiPan/14/base 2025-09-07T07:34:58.1509060Z * [new branch] gh/XuehaiPan/14/head -> origin/gh/XuehaiPan/14/head 2025-09-07T07:34:58.1509416Z * [new branch] gh/XuehaiPan/14/orig -> origin/gh/XuehaiPan/14/orig 2025-09-07T07:34:58.1509906Z * [new branch] gh/XuehaiPan/179/base -> origin/gh/XuehaiPan/179/base 2025-09-07T07:34:58.1512658Z * [new branch] gh/XuehaiPan/179/head 
-> origin/gh/XuehaiPan/179/head 2025-09-07T07:34:58.1513047Z * [new branch] gh/XuehaiPan/179/orig -> origin/gh/XuehaiPan/179/orig 2025-09-07T07:34:58.1513414Z * [new branch] gh/XuehaiPan/189/base -> origin/gh/XuehaiPan/189/base 2025-09-07T07:34:58.1513753Z * [new branch] gh/XuehaiPan/189/head -> origin/gh/XuehaiPan/189/head 2025-09-07T07:34:58.1514093Z * [new branch] gh/XuehaiPan/189/orig -> origin/gh/XuehaiPan/189/orig 2025-09-07T07:34:58.1514428Z * [new branch] gh/XuehaiPan/232/base -> origin/gh/XuehaiPan/232/base 2025-09-07T07:34:58.1514774Z * [new branch] gh/XuehaiPan/232/head -> origin/gh/XuehaiPan/232/head 2025-09-07T07:34:58.1515110Z * [new branch] gh/XuehaiPan/232/orig -> origin/gh/XuehaiPan/232/orig 2025-09-07T07:34:58.1515457Z * [new branch] gh/XuehaiPan/249/base -> origin/gh/XuehaiPan/249/base 2025-09-07T07:34:58.1515793Z * [new branch] gh/XuehaiPan/249/head -> origin/gh/XuehaiPan/249/head 2025-09-07T07:34:58.1516121Z * [new branch] gh/XuehaiPan/249/orig -> origin/gh/XuehaiPan/249/orig 2025-09-07T07:34:58.1516459Z * [new branch] gh/XuehaiPan/253/base -> origin/gh/XuehaiPan/253/base 2025-09-07T07:34:58.1516800Z * [new branch] gh/XuehaiPan/253/head -> origin/gh/XuehaiPan/253/head 2025-09-07T07:34:58.1520703Z * [new branch] gh/XuehaiPan/253/orig -> origin/gh/XuehaiPan/253/orig 2025-09-07T07:34:58.1521176Z * [new branch] gh/XuehaiPan/254/base -> origin/gh/XuehaiPan/254/base 2025-09-07T07:34:58.1521552Z * [new branch] gh/XuehaiPan/254/head -> origin/gh/XuehaiPan/254/head 2025-09-07T07:34:58.1521956Z * [new branch] gh/XuehaiPan/254/orig -> origin/gh/XuehaiPan/254/orig 2025-09-07T07:34:58.1522338Z * [new branch] gh/XuehaiPan/255/base -> origin/gh/XuehaiPan/255/base 2025-09-07T07:34:58.1522733Z * [new branch] gh/XuehaiPan/255/head -> origin/gh/XuehaiPan/255/head 2025-09-07T07:34:58.1523109Z * [new branch] gh/XuehaiPan/255/orig -> origin/gh/XuehaiPan/255/orig 2025-09-07T07:34:58.1523483Z * [new branch] gh/XuehaiPan/257/base -> origin/gh/XuehaiPan/257/base 2025-09-07T07:34:58.1523898Z * [new branch] gh/XuehaiPan/257/head -> origin/gh/XuehaiPan/257/head 2025-09-07T07:34:58.1524521Z * [new branch] gh/XuehaiPan/257/orig -> origin/gh/XuehaiPan/257/orig 2025-09-07T07:34:58.1524917Z * [new branch] gh/XuehaiPan/271/base -> origin/gh/XuehaiPan/271/base 2025-09-07T07:34:58.1525282Z * [new branch] gh/XuehaiPan/271/head -> origin/gh/XuehaiPan/271/head 2025-09-07T07:34:58.1525648Z * [new branch] gh/XuehaiPan/271/orig -> origin/gh/XuehaiPan/271/orig 2025-09-07T07:34:58.1526024Z * [new branch] gh/XuehaiPan/290/base -> origin/gh/XuehaiPan/290/base 2025-09-07T07:34:58.1526387Z * [new branch] gh/XuehaiPan/290/head -> origin/gh/XuehaiPan/290/head 2025-09-07T07:34:58.1526838Z * [new branch] gh/XuehaiPan/290/orig -> origin/gh/XuehaiPan/290/orig 2025-09-07T07:34:58.1527220Z * [new branch] gh/XuehaiPan/343/base -> origin/gh/XuehaiPan/343/base 2025-09-07T07:34:58.1527594Z * [new branch] gh/XuehaiPan/343/head -> origin/gh/XuehaiPan/343/head 2025-09-07T07:34:58.1527971Z * [new branch] gh/XuehaiPan/343/orig -> origin/gh/XuehaiPan/343/orig 2025-09-07T07:34:58.1528325Z * [new branch] gh/XuehaiPan/347/base -> origin/gh/XuehaiPan/347/base 2025-09-07T07:34:58.1528704Z * [new branch] gh/XuehaiPan/347/head -> origin/gh/XuehaiPan/347/head 2025-09-07T07:34:58.1529123Z * [new branch] gh/XuehaiPan/347/orig -> origin/gh/XuehaiPan/347/orig 2025-09-07T07:34:58.1531444Z * [new branch] gh/XuehaiPan/348/base -> origin/gh/XuehaiPan/348/base 2025-09-07T07:34:58.1531822Z * [new branch] gh/XuehaiPan/348/head -> 
origin/gh/XuehaiPan/348/head 2025-09-07T07:34:58.1532185Z * [new branch] gh/XuehaiPan/348/orig -> origin/gh/XuehaiPan/348/orig 2025-09-07T07:34:58.1532557Z * [new branch] gh/XuehaiPan/350/base -> origin/gh/XuehaiPan/350/base 2025-09-07T07:34:58.1532921Z * [new branch] gh/XuehaiPan/350/head -> origin/gh/XuehaiPan/350/head 2025-09-07T07:34:58.1533287Z * [new branch] gh/XuehaiPan/350/orig -> origin/gh/XuehaiPan/350/orig 2025-09-07T07:34:58.1533646Z * [new branch] gh/XuehaiPan/356/base -> origin/gh/XuehaiPan/356/base 2025-09-07T07:34:58.1534011Z * [new branch] gh/XuehaiPan/356/head -> origin/gh/XuehaiPan/356/head 2025-09-07T07:34:58.1534378Z * [new branch] gh/XuehaiPan/356/orig -> origin/gh/XuehaiPan/356/orig 2025-09-07T07:34:58.1539149Z * [new branch] gh/XuehaiPan/357/base -> origin/gh/XuehaiPan/357/base 2025-09-07T07:34:58.1540006Z * [new branch] gh/XuehaiPan/357/head -> origin/gh/XuehaiPan/357/head 2025-09-07T07:34:58.1545956Z * [new branch] gh/XuehaiPan/357/orig -> origin/gh/XuehaiPan/357/orig 2025-09-07T07:34:58.1546438Z * [new branch] gh/XuehaiPan/358/base -> origin/gh/XuehaiPan/358/base 2025-09-07T07:34:58.1546878Z * [new branch] gh/XuehaiPan/358/head -> origin/gh/XuehaiPan/358/head 2025-09-07T07:34:58.1547285Z * [new branch] gh/XuehaiPan/358/orig -> origin/gh/XuehaiPan/358/orig 2025-09-07T07:34:58.1547632Z * [new branch] gh/XuehaiPan/359/base -> origin/gh/XuehaiPan/359/base 2025-09-07T07:34:58.1547954Z * [new branch] gh/XuehaiPan/359/head -> origin/gh/XuehaiPan/359/head 2025-09-07T07:34:58.1548310Z * [new branch] gh/XuehaiPan/359/orig -> origin/gh/XuehaiPan/359/orig 2025-09-07T07:34:58.1548643Z * [new branch] gh/XuehaiPan/360/base -> origin/gh/XuehaiPan/360/base 2025-09-07T07:34:58.1548974Z * [new branch] gh/XuehaiPan/360/head -> origin/gh/XuehaiPan/360/head 2025-09-07T07:34:58.1549304Z * [new branch] gh/XuehaiPan/360/orig -> origin/gh/XuehaiPan/360/orig 2025-09-07T07:34:58.1549622Z * [new branch] gh/XuehaiPan/365/base -> origin/gh/XuehaiPan/365/base 2025-09-07T07:34:58.1550166Z * [new branch] gh/XuehaiPan/365/head -> origin/gh/XuehaiPan/365/head 2025-09-07T07:34:58.1550499Z * [new branch] gh/XuehaiPan/365/orig -> origin/gh/XuehaiPan/365/orig 2025-09-07T07:34:58.1550846Z * [new branch] gh/XuehaiPan/366/base -> origin/gh/XuehaiPan/366/base 2025-09-07T07:34:58.1551188Z * [new branch] gh/XuehaiPan/366/head -> origin/gh/XuehaiPan/366/head 2025-09-07T07:34:58.1551519Z * [new branch] gh/XuehaiPan/369/base -> origin/gh/XuehaiPan/369/base 2025-09-07T07:34:58.1551859Z * [new branch] gh/XuehaiPan/369/head -> origin/gh/XuehaiPan/369/head 2025-09-07T07:34:58.1552194Z * [new branch] gh/XuehaiPan/369/orig -> origin/gh/XuehaiPan/369/orig 2025-09-07T07:34:58.1552524Z * [new branch] gh/XuehaiPan/370/base -> origin/gh/XuehaiPan/370/base 2025-09-07T07:34:58.1552855Z * [new branch] gh/XuehaiPan/370/head -> origin/gh/XuehaiPan/370/head 2025-09-07T07:34:58.1553185Z * [new branch] gh/XuehaiPan/370/orig -> origin/gh/XuehaiPan/370/orig 2025-09-07T07:34:58.1553698Z * [new branch] gh/XuehaiPan/380/base -> origin/gh/XuehaiPan/380/base 2025-09-07T07:34:58.1554198Z * [new branch] gh/XuehaiPan/380/head -> origin/gh/XuehaiPan/380/head 2025-09-07T07:34:58.1554700Z * [new branch] gh/XuehaiPan/380/orig -> origin/gh/XuehaiPan/380/orig 2025-09-07T07:34:58.1555049Z * [new branch] gh/XuehaiPan/381/base -> origin/gh/XuehaiPan/381/base 2025-09-07T07:34:58.1555429Z * [new branch] gh/XuehaiPan/381/head -> origin/gh/XuehaiPan/381/head 2025-09-07T07:34:58.1555756Z * [new branch] gh/XuehaiPan/382/base -> 
origin/gh/XuehaiPan/382/base 2025-09-07T07:34:58.1556091Z * [new branch] gh/XuehaiPan/382/head -> origin/gh/XuehaiPan/382/head 2025-09-07T07:34:58.1556432Z * [new branch] gh/XuehaiPan/382/orig -> origin/gh/XuehaiPan/382/orig 2025-09-07T07:34:58.1556765Z * [new branch] gh/XuehaiPan/383/base -> origin/gh/XuehaiPan/383/base 2025-09-07T07:34:58.1561632Z * [new branch] gh/XuehaiPan/383/head -> origin/gh/XuehaiPan/383/head 2025-09-07T07:34:58.1562201Z * [new branch] gh/XuehaiPan/383/orig -> origin/gh/XuehaiPan/383/orig 2025-09-07T07:34:58.1562729Z * [new branch] gh/XuehaiPan/384/base -> origin/gh/XuehaiPan/384/base 2025-09-07T07:34:58.1563088Z * [new branch] gh/XuehaiPan/384/head -> origin/gh/XuehaiPan/384/head 2025-09-07T07:34:58.1563448Z * [new branch] gh/XuehaiPan/384/orig -> origin/gh/XuehaiPan/384/orig 2025-09-07T07:34:58.1563789Z * [new branch] gh/XuehaiPan/385/base -> origin/gh/XuehaiPan/385/base 2025-09-07T07:34:58.1564131Z * [new branch] gh/XuehaiPan/385/head -> origin/gh/XuehaiPan/385/head 2025-09-07T07:34:58.1564467Z * [new branch] gh/XuehaiPan/385/orig -> origin/gh/XuehaiPan/385/orig 2025-09-07T07:34:58.1564863Z * [new branch] gh/XuehaiPan/386/base -> origin/gh/XuehaiPan/386/base 2025-09-07T07:34:58.1565228Z * [new branch] gh/XuehaiPan/386/head -> origin/gh/XuehaiPan/386/head 2025-09-07T07:34:58.1565599Z * [new branch] gh/XuehaiPan/386/orig -> origin/gh/XuehaiPan/386/orig 2025-09-07T07:34:58.1565967Z * [new branch] gh/XuehaiPan/387/base -> origin/gh/XuehaiPan/387/base 2025-09-07T07:34:58.1566325Z * [new branch] gh/XuehaiPan/387/head -> origin/gh/XuehaiPan/387/head 2025-09-07T07:34:58.1566951Z * [new branch] gh/XuehaiPan/387/orig -> origin/gh/XuehaiPan/387/orig 2025-09-07T07:34:58.1567475Z * [new branch] gh/ZainRizvi/1/base -> origin/gh/ZainRizvi/1/base 2025-09-07T07:34:58.1567855Z * [new branch] gh/ZainRizvi/1/head -> origin/gh/ZainRizvi/1/head 2025-09-07T07:34:58.1568362Z * [new branch] gh/ZainRizvi/2/base -> origin/gh/ZainRizvi/2/base 2025-09-07T07:34:58.1568730Z * [new branch] gh/ZainRizvi/2/head -> origin/gh/ZainRizvi/2/head 2025-09-07T07:34:58.1569058Z * [new branch] gh/ZainRizvi/3/base -> origin/gh/ZainRizvi/3/base 2025-09-07T07:34:58.1569395Z * [new branch] gh/ZainRizvi/3/head -> origin/gh/ZainRizvi/3/head 2025-09-07T07:34:58.1569753Z * [new branch] gh/ZainRizvi/4/base -> origin/gh/ZainRizvi/4/base 2025-09-07T07:34:58.1570532Z * [new branch] gh/ZainRizvi/4/head -> origin/gh/ZainRizvi/4/head 2025-09-07T07:34:58.1571314Z * [new branch] gh/ZainRizvi/5/base -> origin/gh/ZainRizvi/5/base 2025-09-07T07:34:58.1576404Z * [new branch] gh/ZainRizvi/5/head -> origin/gh/ZainRizvi/5/head 2025-09-07T07:34:58.1576837Z * [new branch] gh/ZainRizvi/6/base -> origin/gh/ZainRizvi/6/base 2025-09-07T07:34:58.1577190Z * [new branch] gh/ZainRizvi/6/head -> origin/gh/ZainRizvi/6/head 2025-09-07T07:34:58.1577523Z * [new branch] gh/ZainRizvi/6/orig -> origin/gh/ZainRizvi/6/orig 2025-09-07T07:34:58.1577857Z * [new branch] gh/ZainRizvi/7/base -> origin/gh/ZainRizvi/7/base 2025-09-07T07:34:58.1578304Z * [new branch] gh/ZainRizvi/7/head -> origin/gh/ZainRizvi/7/head 2025-09-07T07:34:58.1578639Z * [new branch] gh/ZainRizvi/7/orig -> origin/gh/ZainRizvi/7/orig 2025-09-07T07:34:58.1578966Z * [new branch] gh/ZainRizvi/8/base -> origin/gh/ZainRizvi/8/base 2025-09-07T07:34:58.1579289Z * [new branch] gh/ZainRizvi/8/head -> origin/gh/ZainRizvi/8/head 2025-09-07T07:34:58.1579619Z * [new branch] gh/ZainRizvi/9/base -> origin/gh/ZainRizvi/9/base 2025-09-07T07:34:58.1579938Z * [new branch] gh/ZainRizvi/9/head -> 
origin/gh/ZainRizvi/9/head 2025-09-07T07:34:58.1580258Z * [new branch] gh/ZainRizvi/9/orig -> origin/gh/ZainRizvi/9/orig 2025-09-07T07:34:58.1580606Z * [new branch] gh/ZhiweiYan-96/39/base -> origin/gh/ZhiweiYan-96/39/base 2025-09-07T07:34:58.1580992Z * [new branch] gh/ZhiweiYan-96/39/head -> origin/gh/ZhiweiYan-96/39/head 2025-09-07T07:34:58.1581793Z * [new branch] gh/ZhiweiYan-96/39/orig -> origin/gh/ZhiweiYan-96/39/orig 2025-09-07T07:34:58.1582613Z * [new branch] gh/ZhiweiYan-96/44/base -> origin/gh/ZhiweiYan-96/44/base 2025-09-07T07:34:58.1583272Z * [new branch] gh/ZhiweiYan-96/44/head -> origin/gh/ZhiweiYan-96/44/head 2025-09-07T07:34:58.1584145Z * [new branch] gh/ZhiweiYan-96/45/base -> origin/gh/ZhiweiYan-96/45/base 2025-09-07T07:34:58.1584620Z * [new branch] gh/ZhiweiYan-96/45/head -> origin/gh/ZhiweiYan-96/45/head 2025-09-07T07:34:58.1585840Z * [new branch] gh/ZhiweiYan-96/49/base -> origin/gh/ZhiweiYan-96/49/base 2025-09-07T07:34:58.1586557Z * [new branch] gh/ZhiweiYan-96/49/head -> origin/gh/ZhiweiYan-96/49/head 2025-09-07T07:34:58.1587493Z * [new branch] gh/ZhiweiYan-96/62/base -> origin/gh/ZhiweiYan-96/62/base 2025-09-07T07:34:58.1587993Z * [new branch] gh/ZhiweiYan-96/62/head -> origin/gh/ZhiweiYan-96/62/head 2025-09-07T07:34:58.1589557Z * [new branch] gh/ZhiweiYan-96/64/base -> origin/gh/ZhiweiYan-96/64/base 2025-09-07T07:34:58.1589910Z * [new branch] gh/ZhiweiYan-96/64/head -> origin/gh/ZhiweiYan-96/64/head 2025-09-07T07:34:58.1590251Z * [new branch] gh/ZhiweiYan-96/64/orig -> origin/gh/ZhiweiYan-96/64/orig 2025-09-07T07:34:58.1591348Z * [new branch] gh/ZhiweiYan-96/65/base -> origin/gh/ZhiweiYan-96/65/base 2025-09-07T07:34:58.1591792Z * [new branch] gh/ZhiweiYan-96/65/head -> origin/gh/ZhiweiYan-96/65/head 2025-09-07T07:34:58.1594071Z * [new branch] gh/ZhiweiYan-96/65/orig -> origin/gh/ZhiweiYan-96/65/orig 2025-09-07T07:34:58.1594539Z * [new branch] gh/ZhiweiYan-96/66/base -> origin/gh/ZhiweiYan-96/66/base 2025-09-07T07:34:58.1594954Z * [new branch] gh/ZhiweiYan-96/66/head -> origin/gh/ZhiweiYan-96/66/head 2025-09-07T07:34:58.1595317Z * [new branch] gh/ZhiweiYan-96/67/base -> origin/gh/ZhiweiYan-96/67/base 2025-09-07T07:34:58.1595697Z * [new branch] gh/ZhiweiYan-96/67/head -> origin/gh/ZhiweiYan-96/67/head 2025-09-07T07:34:58.1596070Z * [new branch] gh/ZhiweiYan-96/68/base -> origin/gh/ZhiweiYan-96/68/base 2025-09-07T07:34:58.1596489Z * [new branch] gh/ZhiweiYan-96/68/head -> origin/gh/ZhiweiYan-96/68/head 2025-09-07T07:34:58.1597325Z * [new branch] gh/ZhiweiYan-96/68/orig -> origin/gh/ZhiweiYan-96/68/orig 2025-09-07T07:34:58.1598360Z * [new branch] gh/aakhundov/1/base -> origin/gh/aakhundov/1/base 2025-09-07T07:34:58.1598956Z * [new branch] gh/aakhundov/1/head -> origin/gh/aakhundov/1/head 2025-09-07T07:34:58.1599689Z * [new branch] gh/aakhundov/2/base -> origin/gh/aakhundov/2/base 2025-09-07T07:34:58.1600317Z * [new branch] gh/aakhundov/2/head -> origin/gh/aakhundov/2/head 2025-09-07T07:34:58.1603897Z * [new branch] gh/aditew01/openblas -> origin/gh/aditew01/openblas 2025-09-07T07:34:58.1604320Z * [new branch] gh/aditew01/sbgemm -> origin/gh/aditew01/sbgemm 2025-09-07T07:34:58.1604733Z * [new branch] gh/aditew01/vecbf16 -> origin/gh/aditew01/vecbf16 2025-09-07T07:34:58.1605257Z * [new branch] gh/alexbrauckmann/paddedtensor_faketensor_init -> origin/gh/alexbrauckmann/paddedtensor_faketensor_init 2025-09-07T07:34:58.1605803Z * [new branch] gh/alexsamardzic/9/base -> origin/gh/alexsamardzic/9/base 2025-09-07T07:34:58.1606193Z * [new branch] gh/alexsamardzic/9/head -> 
origin/gh/alexsamardzic/9/head 2025-09-07T07:34:58.1606658Z * [new branch] gh/alexsamardzic/9/orig -> origin/gh/alexsamardzic/9/orig 2025-09-07T07:34:58.1611604Z * [new branch] gh/amjames/18/base -> origin/gh/amjames/18/base 2025-09-07T07:34:58.1611955Z * [new branch] gh/amjames/18/head -> origin/gh/amjames/18/head 2025-09-07T07:34:58.1612286Z * [new branch] gh/amjames/18/orig -> origin/gh/amjames/18/orig 2025-09-07T07:34:58.1612620Z * [new branch] gh/andrewor14/35/base -> origin/gh/andrewor14/35/base 2025-09-07T07:34:58.1612954Z * [new branch] gh/andrewor14/35/head -> origin/gh/andrewor14/35/head 2025-09-07T07:34:58.1613293Z * [new branch] gh/andrewor14/35/orig -> origin/gh/andrewor14/35/orig 2025-09-07T07:34:58.1613628Z * [new branch] gh/andrewor14/50/base -> origin/gh/andrewor14/50/base 2025-09-07T07:34:58.1616562Z * [new branch] gh/andrewor14/50/head -> origin/gh/andrewor14/50/head 2025-09-07T07:34:58.1621126Z * [new branch] gh/andrewor14/50/orig -> origin/gh/andrewor14/50/orig 2025-09-07T07:34:58.1625303Z * [new branch] gh/andrewor14/51/base -> origin/gh/andrewor14/51/base 2025-09-07T07:34:58.1628574Z * [new branch] gh/andrewor14/51/orig -> origin/gh/andrewor14/51/orig 2025-09-07T07:34:58.1628943Z * [new branch] gh/andyanwang/1/base -> origin/gh/andyanwang/1/base 2025-09-07T07:34:58.1629277Z * [new branch] gh/andyanwang/1/head -> origin/gh/andyanwang/1/head 2025-09-07T07:34:58.1629612Z * [new branch] gh/andyanwang/1/orig -> origin/gh/andyanwang/1/orig 2025-09-07T07:34:58.1629948Z * [new branch] gh/andyanwang/13/base -> origin/gh/andyanwang/13/base 2025-09-07T07:34:58.1630423Z * [new branch] gh/andyanwang/13/head -> origin/gh/andyanwang/13/head 2025-09-07T07:34:58.1630752Z * [new branch] gh/andyanwang/13/orig -> origin/gh/andyanwang/13/orig 2025-09-07T07:34:58.1631076Z * [new branch] gh/andyanwang/2/base -> origin/gh/andyanwang/2/base 2025-09-07T07:34:58.1631423Z * [new branch] gh/andyanwang/2/head -> origin/gh/andyanwang/2/head 2025-09-07T07:34:58.1631751Z * [new branch] gh/andyanwang/2/orig -> origin/gh/andyanwang/2/orig 2025-09-07T07:34:58.1632075Z * [new branch] gh/andyanwang/28/base -> origin/gh/andyanwang/28/base 2025-09-07T07:34:58.1632408Z * [new branch] gh/andyanwang/28/head -> origin/gh/andyanwang/28/head 2025-09-07T07:34:58.1632740Z * [new branch] gh/andyanwang/28/orig -> origin/gh/andyanwang/28/orig 2025-09-07T07:34:58.1633073Z * [new branch] gh/andyanwang/3/base -> origin/gh/andyanwang/3/base 2025-09-07T07:34:58.1633400Z * [new branch] gh/andyanwang/3/head -> origin/gh/andyanwang/3/head 2025-09-07T07:34:58.1633715Z * [new branch] gh/andyanwang/3/orig -> origin/gh/andyanwang/3/orig 2025-09-07T07:34:58.1634043Z * [new branch] gh/andyanwang/30/base -> origin/gh/andyanwang/30/base 2025-09-07T07:34:58.1634564Z * [new branch] gh/andyanwang/30/orig -> origin/gh/andyanwang/30/orig 2025-09-07T07:34:58.1634903Z * [new branch] gh/andyanwang/31/base -> origin/gh/andyanwang/31/base 2025-09-07T07:34:58.1635231Z * [new branch] gh/andyanwang/31/orig -> origin/gh/andyanwang/31/orig 2025-09-07T07:34:58.1635553Z * [new branch] gh/andyanwang/32/base -> origin/gh/andyanwang/32/base 2025-09-07T07:34:58.1635882Z * [new branch] gh/andyanwang/32/head -> origin/gh/andyanwang/32/head 2025-09-07T07:34:58.1636215Z * [new branch] gh/andyanwang/32/orig -> origin/gh/andyanwang/32/orig 2025-09-07T07:34:58.1636546Z * [new branch] gh/andyanwang/39/base -> origin/gh/andyanwang/39/base 2025-09-07T07:34:58.1637784Z * [new branch] gh/andyanwang/39/head -> origin/gh/andyanwang/39/head 
2025-09-07T07:34:58.1638114Z * [new branch] gh/andyanwang/39/orig -> origin/gh/andyanwang/39/orig 2025-09-07T07:34:58.1638448Z * [new branch] gh/andyanwang/4/base -> origin/gh/andyanwang/4/base 2025-09-07T07:34:58.1638776Z * [new branch] gh/andyanwang/4/head -> origin/gh/andyanwang/4/head 2025-09-07T07:34:58.1639108Z * [new branch] gh/andyanwang/4/orig -> origin/gh/andyanwang/4/orig 2025-09-07T07:34:58.1641509Z * [new branch] gh/angelayi/107/base -> origin/gh/angelayi/107/base 2025-09-07T07:34:58.1641879Z * [new branch] gh/angelayi/107/head -> origin/gh/angelayi/107/head 2025-09-07T07:34:58.1642249Z * [new branch] gh/angelayi/111/base -> origin/gh/angelayi/111/base 2025-09-07T07:34:58.1642616Z * [new branch] gh/angelayi/111/head -> origin/gh/angelayi/111/head 2025-09-07T07:34:58.1642987Z * [new branch] gh/angelayi/111/orig -> origin/gh/angelayi/111/orig 2025-09-07T07:34:58.1643368Z * [new branch] gh/angelayi/112/base -> origin/gh/angelayi/112/base 2025-09-07T07:34:58.1643734Z * [new branch] gh/angelayi/112/head -> origin/gh/angelayi/112/head 2025-09-07T07:34:58.1644095Z * [new branch] gh/angelayi/112/orig -> origin/gh/angelayi/112/orig 2025-09-07T07:34:58.1645167Z * [new branch] gh/angelayi/113/base -> origin/gh/angelayi/113/base 2025-09-07T07:34:58.1645562Z * [new branch] gh/angelayi/113/head -> origin/gh/angelayi/113/head 2025-09-07T07:34:58.1647288Z * [new branch] gh/angelayi/113/orig -> origin/gh/angelayi/113/orig 2025-09-07T07:34:58.1647816Z * [new branch] gh/angelayi/114/base -> origin/gh/angelayi/114/base 2025-09-07T07:34:58.1648428Z * [new branch] gh/angelayi/114/head -> origin/gh/angelayi/114/head 2025-09-07T07:34:58.1653899Z * [new branch] gh/angelayi/114/orig -> origin/gh/angelayi/114/orig 2025-09-07T07:34:58.1654402Z * [new branch] gh/angelayi/115/base -> origin/gh/angelayi/115/base 2025-09-07T07:34:58.1654798Z * [new branch] gh/angelayi/115/head -> origin/gh/angelayi/115/head 2025-09-07T07:34:58.1655181Z * [new branch] gh/angelayi/115/orig -> origin/gh/angelayi/115/orig 2025-09-07T07:34:58.1655586Z * [new branch] gh/anijain2305/753/base -> origin/gh/anijain2305/753/base 2025-09-07T07:34:58.1655977Z * [new branch] gh/anijain2305/753/head -> origin/gh/anijain2305/753/head 2025-09-07T07:34:58.1656392Z * [new branch] gh/anijain2305/753/orig -> origin/gh/anijain2305/753/orig 2025-09-07T07:34:58.1656791Z * [new branch] gh/anijain2305/766/base -> origin/gh/anijain2305/766/base 2025-09-07T07:34:58.1657186Z * [new branch] gh/anijain2305/766/head -> origin/gh/anijain2305/766/head 2025-09-07T07:34:58.1657572Z * [new branch] gh/anijain2305/766/orig -> origin/gh/anijain2305/766/orig 2025-09-07T07:34:58.1658143Z * [new branch] gh/anijain2305/790/base -> origin/gh/anijain2305/790/base 2025-09-07T07:34:58.1658523Z * [new branch] gh/anijain2305/790/head -> origin/gh/anijain2305/790/head 2025-09-07T07:34:58.1658877Z * [new branch] gh/anijain2305/790/orig -> origin/gh/anijain2305/790/orig 2025-09-07T07:34:58.1659589Z * [new branch] gh/anijain2305/792/base -> origin/gh/anijain2305/792/base 2025-09-07T07:34:58.1664537Z * [new branch] gh/anijain2305/792/head -> origin/gh/anijain2305/792/head 2025-09-07T07:34:58.1664982Z * [new branch] gh/anijain2305/792/orig -> origin/gh/anijain2305/792/orig 2025-09-07T07:34:58.1665336Z * [new branch] gh/anijain2305/803/base -> origin/gh/anijain2305/803/base 2025-09-07T07:34:58.1665722Z * [new branch] gh/anijain2305/803/head -> origin/gh/anijain2305/803/head 2025-09-07T07:34:58.1666084Z * [new branch] gh/anijain2305/803/orig -> origin/gh/anijain2305/803/orig 
2025-09-07T07:34:58.1666433Z * [new branch] gh/anijain2305/804/base -> origin/gh/anijain2305/804/base 2025-09-07T07:34:58.1667102Z * [new branch] gh/anijain2305/804/head -> origin/gh/anijain2305/804/head 2025-09-07T07:34:58.1667449Z * [new branch] gh/anijain2305/804/orig -> origin/gh/anijain2305/804/orig 2025-09-07T07:34:58.1667796Z * [new branch] gh/anijain2305/805/base -> origin/gh/anijain2305/805/base 2025-09-07T07:34:58.1668153Z * [new branch] gh/anijain2305/805/head -> origin/gh/anijain2305/805/head 2025-09-07T07:34:58.1668497Z * [new branch] gh/anijain2305/805/orig -> origin/gh/anijain2305/805/orig 2025-09-07T07:34:58.1668841Z * [new branch] gh/anijain2305/810/base -> origin/gh/anijain2305/810/base 2025-09-07T07:34:58.1672966Z * [new branch] gh/anijain2305/810/head -> origin/gh/anijain2305/810/head 2025-09-07T07:34:58.1673520Z * [new branch] gh/anijain2305/810/orig -> origin/gh/anijain2305/810/orig 2025-09-07T07:34:58.1674016Z * [new branch] gh/anijain2305/812/base -> origin/gh/anijain2305/812/base 2025-09-07T07:34:58.1674514Z * [new branch] gh/anijain2305/812/head -> origin/gh/anijain2305/812/head 2025-09-07T07:34:58.1674865Z * [new branch] gh/anijain2305/812/orig -> origin/gh/anijain2305/812/orig 2025-09-07T07:34:58.1675218Z * [new branch] gh/anijain2305/838/base -> origin/gh/anijain2305/838/base 2025-09-07T07:34:58.1675711Z * [new branch] gh/anijain2305/838/head -> origin/gh/anijain2305/838/head 2025-09-07T07:34:58.1676122Z * [new branch] gh/anijain2305/838/orig -> origin/gh/anijain2305/838/orig 2025-09-07T07:34:58.1677547Z * [new branch] gh/anijain2305/839/base -> origin/gh/anijain2305/839/base 2025-09-07T07:34:58.1678046Z * [new branch] gh/anijain2305/839/head -> origin/gh/anijain2305/839/head 2025-09-07T07:34:58.1678506Z * [new branch] gh/anijain2305/839/orig -> origin/gh/anijain2305/839/orig 2025-09-07T07:34:58.1679121Z * [new branch] gh/anijain2305/843/base -> origin/gh/anijain2305/843/base 2025-09-07T07:34:58.1679596Z * [new branch] gh/anijain2305/843/head -> origin/gh/anijain2305/843/head 2025-09-07T07:34:58.1679946Z * [new branch] gh/anijain2305/843/orig -> origin/gh/anijain2305/843/orig 2025-09-07T07:34:58.1680301Z * [new branch] gh/anijain2305/844/base -> origin/gh/anijain2305/844/base 2025-09-07T07:34:58.1680644Z * [new branch] gh/anijain2305/844/head -> origin/gh/anijain2305/844/head 2025-09-07T07:34:58.1682513Z * [new branch] gh/anijain2305/844/orig -> origin/gh/anijain2305/844/orig 2025-09-07T07:34:58.1682883Z * [new branch] gh/anijain2305/846/base -> origin/gh/anijain2305/846/base 2025-09-07T07:34:58.1683331Z * [new branch] gh/anijain2305/846/head -> origin/gh/anijain2305/846/head 2025-09-07T07:34:58.1683689Z * [new branch] gh/anijain2305/846/orig -> origin/gh/anijain2305/846/orig 2025-09-07T07:34:58.1684052Z * [new branch] gh/anijain2305/848/base -> origin/gh/anijain2305/848/base 2025-09-07T07:34:58.1684407Z * [new branch] gh/anijain2305/848/head -> origin/gh/anijain2305/848/head 2025-09-07T07:34:58.1684773Z * [new branch] gh/anijain2305/848/orig -> origin/gh/anijain2305/848/orig 2025-09-07T07:34:58.1692756Z * [new branch] gh/anijain2305/849/base -> origin/gh/anijain2305/849/base 2025-09-07T07:34:58.1693458Z * [new branch] gh/anijain2305/849/head -> origin/gh/anijain2305/849/head 2025-09-07T07:34:58.1693976Z * [new branch] gh/anijain2305/849/orig -> origin/gh/anijain2305/849/orig 2025-09-07T07:34:58.1694373Z * [new branch] gh/anijain2305/850/base -> origin/gh/anijain2305/850/base 2025-09-07T07:34:58.1694753Z * [new branch] gh/anijain2305/850/head -> 
origin/gh/anijain2305/850/head 2025-09-07T07:34:58.1695119Z * [new branch] gh/anijain2305/850/orig -> origin/gh/anijain2305/850/orig 2025-09-07T07:34:58.1695486Z * [new branch] gh/anijain2305/851/base -> origin/gh/anijain2305/851/base 2025-09-07T07:34:58.1695850Z * [new branch] gh/anijain2305/851/head -> origin/gh/anijain2305/851/head 2025-09-07T07:34:58.1696214Z * [new branch] gh/anijain2305/851/orig -> origin/gh/anijain2305/851/orig 2025-09-07T07:34:58.1696575Z * [new branch] gh/anijain2305/852/base -> origin/gh/anijain2305/852/base 2025-09-07T07:34:58.1696940Z * [new branch] gh/anijain2305/852/head -> origin/gh/anijain2305/852/head 2025-09-07T07:34:58.1697299Z * [new branch] gh/anijain2305/852/orig -> origin/gh/anijain2305/852/orig 2025-09-07T07:34:58.1697662Z * [new branch] gh/anijain2305/853/base -> origin/gh/anijain2305/853/base 2025-09-07T07:34:58.1698022Z * [new branch] gh/anijain2305/853/head -> origin/gh/anijain2305/853/head 2025-09-07T07:34:58.1698375Z * [new branch] gh/anijain2305/853/orig -> origin/gh/anijain2305/853/orig 2025-09-07T07:34:58.1698883Z * [new branch] gh/anijain2305/854/base -> origin/gh/anijain2305/854/base 2025-09-07T07:34:58.1699262Z * [new branch] gh/anijain2305/854/head -> origin/gh/anijain2305/854/head 2025-09-07T07:34:58.1699870Z * [new branch] gh/anijain2305/854/orig -> origin/gh/anijain2305/854/orig 2025-09-07T07:34:58.1700249Z * [new branch] gh/anijain2305/855/base -> origin/gh/anijain2305/855/base 2025-09-07T07:34:58.1700602Z * [new branch] gh/anijain2305/855/head -> origin/gh/anijain2305/855/head 2025-09-07T07:34:58.1700969Z * [new branch] gh/anijain2305/855/orig -> origin/gh/anijain2305/855/orig 2025-09-07T07:34:58.1701323Z * [new branch] gh/anijain2305/856/base -> origin/gh/anijain2305/856/base 2025-09-07T07:34:58.1701655Z * [new branch] gh/anijain2305/856/head -> origin/gh/anijain2305/856/head 2025-09-07T07:34:58.1701994Z * [new branch] gh/anijain2305/856/orig -> origin/gh/anijain2305/856/orig 2025-09-07T07:34:58.1702669Z * [new branch] gh/anijain2305/857/base -> origin/gh/anijain2305/857/base 2025-09-07T07:34:58.1703010Z * [new branch] gh/anijain2305/857/head -> origin/gh/anijain2305/857/head 2025-09-07T07:34:58.1703357Z * [new branch] gh/anijain2305/857/orig -> origin/gh/anijain2305/857/orig 2025-09-07T07:34:58.1703694Z * [new branch] gh/anijain2305/858/base -> origin/gh/anijain2305/858/base 2025-09-07T07:34:58.1704026Z * [new branch] gh/anijain2305/858/head -> origin/gh/anijain2305/858/head 2025-09-07T07:34:58.1704403Z * [new branch] gh/anijain2305/858/orig -> origin/gh/anijain2305/858/orig 2025-09-07T07:34:58.1708992Z * [new branch] gh/anijain2305/859/base -> origin/gh/anijain2305/859/base 2025-09-07T07:34:58.1709556Z * [new branch] gh/anijain2305/859/head -> origin/gh/anijain2305/859/head 2025-09-07T07:34:58.1710043Z * [new branch] gh/anijain2305/859/orig -> origin/gh/anijain2305/859/orig 2025-09-07T07:34:58.1710946Z * [new branch] gh/anijain2305/860/base -> origin/gh/anijain2305/860/base 2025-09-07T07:34:58.1711448Z * [new branch] gh/anijain2305/860/head -> origin/gh/anijain2305/860/head 2025-09-07T07:34:58.1711824Z * [new branch] gh/anijain2305/860/orig -> origin/gh/anijain2305/860/orig 2025-09-07T07:34:58.1712176Z * [new branch] gh/anijain2305/861/base -> origin/gh/anijain2305/861/base 2025-09-07T07:34:58.1712524Z * [new branch] gh/anijain2305/861/head -> origin/gh/anijain2305/861/head 2025-09-07T07:34:58.1712883Z * [new branch] gh/anijain2305/861/orig -> origin/gh/anijain2305/861/orig 2025-09-07T07:34:58.1713232Z * [new branch] 
gh/anijain2305/862/base -> origin/gh/anijain2305/862/base 2025-09-07T07:34:58.1713580Z * [new branch] gh/anijain2305/862/head -> origin/gh/anijain2305/862/head 2025-09-07T07:34:58.1713921Z * [new branch] gh/anijain2305/862/orig -> origin/gh/anijain2305/862/orig 2025-09-07T07:34:58.1714265Z * [new branch] gh/anijain2305/863/base -> origin/gh/anijain2305/863/base 2025-09-07T07:34:58.1715773Z * [new branch] gh/anijain2305/863/head -> origin/gh/anijain2305/863/head 2025-09-07T07:34:58.1716165Z * [new branch] gh/anijain2305/863/orig -> origin/gh/anijain2305/863/orig 2025-09-07T07:34:58.1716515Z * [new branch] gh/anijain2305/864/base -> origin/gh/anijain2305/864/base 2025-09-07T07:34:58.1716865Z * [new branch] gh/anijain2305/864/head -> origin/gh/anijain2305/864/head 2025-09-07T07:34:58.1717208Z * [new branch] gh/anijain2305/864/orig -> origin/gh/anijain2305/864/orig 2025-09-07T07:34:58.1717549Z * [new branch] gh/anijain2305/865/base -> origin/gh/anijain2305/865/base 2025-09-07T07:34:58.1720450Z * [new branch] gh/anijain2305/865/head -> origin/gh/anijain2305/865/head 2025-09-07T07:34:58.1720804Z * [new branch] gh/anijain2305/865/orig -> origin/gh/anijain2305/865/orig 2025-09-07T07:34:58.1721288Z * [new branch] gh/anijain2305/866/base -> origin/gh/anijain2305/866/base 2025-09-07T07:34:58.1721634Z * [new branch] gh/anijain2305/866/head -> origin/gh/anijain2305/866/head 2025-09-07T07:34:58.1721984Z * [new branch] gh/anijain2305/866/orig -> origin/gh/anijain2305/866/orig 2025-09-07T07:34:58.1722331Z * [new branch] gh/anjali411/216/base -> origin/gh/anjali411/216/base 2025-09-07T07:34:58.1722837Z * [new branch] gh/anjali411/216/head -> origin/gh/anjali411/216/head 2025-09-07T07:34:58.1723172Z * [new branch] gh/anjali411/216/orig -> origin/gh/anjali411/216/orig 2025-09-07T07:34:58.1723528Z * [new branch] gh/ankitageorge/13/base -> origin/gh/ankitageorge/13/base 2025-09-07T07:34:58.1724384Z * [new branch] gh/ankitageorge/13/head -> origin/gh/ankitageorge/13/head 2025-09-07T07:34:58.1724889Z * [new branch] gh/ankitageorge/13/orig -> origin/gh/ankitageorge/13/orig 2025-09-07T07:34:58.1725596Z * [new branch] gh/ankitageorge/14/base -> origin/gh/ankitageorge/14/base 2025-09-07T07:34:58.1726169Z * [new branch] gh/ankitageorge/14/head -> origin/gh/ankitageorge/14/head 2025-09-07T07:34:58.1727232Z * [new branch] gh/ankitageorge/14/orig -> origin/gh/ankitageorge/14/orig 2025-09-07T07:34:58.1728194Z * [new branch] gh/ankitageorge/15/base -> origin/gh/ankitageorge/15/base 2025-09-07T07:34:58.1728707Z * [new branch] gh/ankitageorge/15/head -> origin/gh/ankitageorge/15/head 2025-09-07T07:34:58.1729496Z * [new branch] gh/ankitageorge/15/orig -> origin/gh/ankitageorge/15/orig 2025-09-07T07:34:58.1730766Z * [new branch] gh/ankitageorge/16/base -> origin/gh/ankitageorge/16/base 2025-09-07T07:34:58.1731220Z * [new branch] gh/ankitageorge/16/head -> origin/gh/ankitageorge/16/head 2025-09-07T07:34:58.1731990Z * [new branch] gh/ankitageorge/16/orig -> origin/gh/ankitageorge/16/orig 2025-09-07T07:34:58.1733161Z * [new branch] gh/ankitageorge/17/base -> origin/gh/ankitageorge/17/base 2025-09-07T07:34:58.1733843Z * [new branch] gh/ankitageorge/17/head -> origin/gh/ankitageorge/17/head 2025-09-07T07:34:58.1734249Z * [new branch] gh/ankitageorge/17/orig -> origin/gh/ankitageorge/17/orig 2025-09-07T07:34:58.1735537Z * [new branch] gh/ankitageorge/21/base -> origin/gh/ankitageorge/21/base 2025-09-07T07:34:58.1736026Z * [new branch] gh/ankitageorge/21/head -> origin/gh/ankitageorge/21/head 2025-09-07T07:34:58.1736669Z * [new 
branch] gh/ankitageorge/21/orig -> origin/gh/ankitageorge/21/orig 2025-09-07T07:34:58.1738113Z * [new branch] gh/anshul-si/1/base -> origin/gh/anshul-si/1/base 2025-09-07T07:34:58.1738718Z * [new branch] gh/anshul-si/1/head -> origin/gh/anshul-si/1/head 2025-09-07T07:34:58.1739965Z * [new branch] gh/anshul-si/15/base -> origin/gh/anshul-si/15/base 2025-09-07T07:34:58.1740327Z * [new branch] gh/anshul-si/15/head -> origin/gh/anshul-si/15/head 2025-09-07T07:34:58.1741051Z * [new branch] gh/anshul-si/15/orig -> origin/gh/anshul-si/15/orig 2025-09-07T07:34:58.1742336Z * [new branch] gh/anshul-si/16/base -> origin/gh/anshul-si/16/base 2025-09-07T07:34:58.1742748Z * [new branch] gh/anshul-si/16/head -> origin/gh/anshul-si/16/head 2025-09-07T07:34:58.1743415Z * [new branch] gh/anshul-si/16/orig -> origin/gh/anshul-si/16/orig 2025-09-07T07:34:58.1744648Z * [new branch] gh/anshul-si/17/base -> origin/gh/anshul-si/17/base 2025-09-07T07:34:58.1745333Z * [new branch] gh/anshul-si/17/head -> origin/gh/anshul-si/17/head 2025-09-07T07:34:58.1746096Z * [new branch] gh/anshul-si/17/orig -> origin/gh/anshul-si/17/orig 2025-09-07T07:34:58.1747440Z * [new branch] gh/anshul-si/18/base -> origin/gh/anshul-si/18/base 2025-09-07T07:34:58.1747830Z * [new branch] gh/anshul-si/18/head -> origin/gh/anshul-si/18/head 2025-09-07T07:34:58.1748861Z * [new branch] gh/anshul-si/18/orig -> origin/gh/anshul-si/18/orig 2025-09-07T07:34:58.1749713Z * [new branch] gh/anshul-si/19/base -> origin/gh/anshul-si/19/base 2025-09-07T07:34:58.1750390Z * [new branch] gh/anshul-si/19/head -> origin/gh/anshul-si/19/head 2025-09-07T07:34:58.1751040Z * [new branch] gh/anshul-si/19/orig -> origin/gh/anshul-si/19/orig 2025-09-07T07:34:58.1751820Z * [new branch] gh/anshul-si/2/base -> origin/gh/anshul-si/2/base 2025-09-07T07:34:58.1752424Z * [new branch] gh/anshul-si/2/head -> origin/gh/anshul-si/2/head 2025-09-07T07:34:58.1753615Z * [new branch] gh/anshul-si/20/base -> origin/gh/anshul-si/20/base 2025-09-07T07:34:58.1754254Z * [new branch] gh/anshul-si/20/head -> origin/gh/anshul-si/20/head 2025-09-07T07:34:58.1754798Z * [new branch] gh/anshul-si/20/orig -> origin/gh/anshul-si/20/orig 2025-09-07T07:34:58.1755747Z * [new branch] gh/anshul-si/21/base -> origin/gh/anshul-si/21/base 2025-09-07T07:34:58.1756316Z * [new branch] gh/anshul-si/21/head -> origin/gh/anshul-si/21/head 2025-09-07T07:34:58.1756936Z * [new branch] gh/anshul-si/21/orig -> origin/gh/anshul-si/21/orig 2025-09-07T07:34:58.1758099Z * [new branch] gh/anshul-si/22/base -> origin/gh/anshul-si/22/base 2025-09-07T07:34:58.1758732Z * [new branch] gh/anshul-si/22/head -> origin/gh/anshul-si/22/head 2025-09-07T07:34:58.1759429Z * [new branch] gh/anshul-si/22/orig -> origin/gh/anshul-si/22/orig 2025-09-07T07:34:58.1760171Z * [new branch] gh/anshul-si/23/base -> origin/gh/anshul-si/23/base 2025-09-07T07:34:58.1760834Z * [new branch] gh/anshul-si/23/head -> origin/gh/anshul-si/23/head 2025-09-07T07:34:58.1761453Z * [new branch] gh/anshul-si/23/orig -> origin/gh/anshul-si/23/orig 2025-09-07T07:34:58.1762567Z * [new branch] gh/anshul-si/24/base -> origin/gh/anshul-si/24/base 2025-09-07T07:34:58.1763087Z * [new branch] gh/anshul-si/24/head -> origin/gh/anshul-si/24/head 2025-09-07T07:34:58.1763760Z * [new branch] gh/anshul-si/24/orig -> origin/gh/anshul-si/24/orig 2025-09-07T07:34:58.1764837Z * [new branch] gh/anshul-si/25/base -> origin/gh/anshul-si/25/base 2025-09-07T07:34:58.1765329Z * [new branch] gh/anshul-si/25/head -> origin/gh/anshul-si/25/head 2025-09-07T07:34:58.1766054Z * [new 
branch] gh/anshul-si/25/orig -> origin/gh/anshul-si/25/orig 2025-09-07T07:34:58.1767181Z * [new branch] gh/anshul-si/26/base -> origin/gh/anshul-si/26/base 2025-09-07T07:34:58.1767721Z * [new branch] gh/anshul-si/26/head -> origin/gh/anshul-si/26/head 2025-09-07T07:34:58.1768417Z * [new branch] gh/anshul-si/26/orig -> origin/gh/anshul-si/26/orig 2025-09-07T07:34:58.1769557Z * [new branch] gh/anshul-si/27/base -> origin/gh/anshul-si/27/base 2025-09-07T07:34:58.1769971Z * [new branch] gh/anshul-si/27/head -> origin/gh/anshul-si/27/head 2025-09-07T07:34:58.1770978Z * [new branch] gh/anshul-si/27/orig -> origin/gh/anshul-si/27/orig 2025-09-07T07:34:58.1771485Z * [new branch] gh/anshul-si/28/base -> origin/gh/anshul-si/28/base 2025-09-07T07:34:58.1772371Z * [new branch] gh/anshul-si/28/head -> origin/gh/anshul-si/28/head 2025-09-07T07:34:58.1772762Z * [new branch] gh/anshul-si/28/orig -> origin/gh/anshul-si/28/orig 2025-09-07T07:34:58.1773836Z * [new branch] gh/anshul-si/29/base -> origin/gh/anshul-si/29/base 2025-09-07T07:34:58.1774394Z * [new branch] gh/anshul-si/29/head -> origin/gh/anshul-si/29/head 2025-09-07T07:34:58.1775277Z * [new branch] gh/anshul-si/29/orig -> origin/gh/anshul-si/29/orig 2025-09-07T07:34:58.1776082Z * [new branch] gh/anshul-si/3/base -> origin/gh/anshul-si/3/base 2025-09-07T07:34:58.1776462Z * [new branch] gh/anshul-si/3/head -> origin/gh/anshul-si/3/head 2025-09-07T07:34:58.1777556Z * [new branch] gh/anshul-si/4/base -> origin/gh/anshul-si/4/base 2025-09-07T07:34:58.1778140Z * [new branch] gh/anshul-si/4/head -> origin/gh/anshul-si/4/head 2025-09-07T07:34:58.1779898Z * [new branch] gh/anshul-si/5/base -> origin/gh/anshul-si/5/base 2025-09-07T07:34:58.1780196Z * [new branch] gh/anshul-si/5/head -> origin/gh/anshul-si/5/head 2025-09-07T07:34:58.1780943Z * [new branch] gh/aorenste/132/base -> origin/gh/aorenste/132/base 2025-09-07T07:34:58.1781486Z * [new branch] gh/aorenste/132/head -> origin/gh/aorenste/132/head 2025-09-07T07:34:58.1782935Z * [new branch] gh/bdhirsh/650/base -> origin/gh/bdhirsh/650/base 2025-09-07T07:34:58.1783671Z * [new branch] gh/bdhirsh/650/head -> origin/gh/bdhirsh/650/head 2025-09-07T07:34:58.1784114Z * [new branch] gh/bdhirsh/650/orig -> origin/gh/bdhirsh/650/orig 2025-09-07T07:34:58.1786746Z * [new branch] gh/bdhirsh/663/base -> origin/gh/bdhirsh/663/base 2025-09-07T07:34:58.1786918Z * [new branch] gh/bdhirsh/663/head -> origin/gh/bdhirsh/663/head 2025-09-07T07:34:58.1787107Z * [new branch] gh/bdhirsh/663/orig -> origin/gh/bdhirsh/663/orig 2025-09-07T07:34:58.1793679Z * [new branch] gh/bdhirsh/665/base -> origin/gh/bdhirsh/665/base 2025-09-07T07:34:58.1798615Z * [new branch] gh/bdhirsh/665/head -> origin/gh/bdhirsh/665/head 2025-09-07T07:34:58.1800666Z * [new branch] gh/bdhirsh/665/orig -> origin/gh/bdhirsh/665/orig 2025-09-07T07:34:58.1800822Z * [new branch] gh/bdhirsh/666/base -> origin/gh/bdhirsh/666/base 2025-09-07T07:34:58.1801018Z * [new branch] gh/bdhirsh/666/head -> origin/gh/bdhirsh/666/head 2025-09-07T07:34:58.1801154Z * [new branch] gh/bdhirsh/666/orig -> origin/gh/bdhirsh/666/orig 2025-09-07T07:34:58.1801291Z * [new branch] gh/bdhirsh/667/base -> origin/gh/bdhirsh/667/base 2025-09-07T07:34:58.1801422Z * [new branch] gh/bdhirsh/667/head -> origin/gh/bdhirsh/667/head 2025-09-07T07:34:58.1801587Z * [new branch] gh/bdhirsh/667/orig -> origin/gh/bdhirsh/667/orig 2025-09-07T07:34:58.1801740Z * [new branch] gh/bdhirsh/668/base -> origin/gh/bdhirsh/668/base 2025-09-07T07:34:58.1801892Z * [new branch] gh/bdhirsh/668/head -> 
origin/gh/bdhirsh/668/head 2025-09-07T07:34:58.1802034Z * [new branch] gh/bdhirsh/668/orig -> origin/gh/bdhirsh/668/orig 2025-09-07T07:34:58.1802179Z * [new branch] gh/bdhirsh/669/base -> origin/gh/bdhirsh/669/base 2025-09-07T07:34:58.1802333Z * [new branch] gh/bdhirsh/669/head -> origin/gh/bdhirsh/669/head 2025-09-07T07:34:58.1802476Z * [new branch] gh/bdhirsh/669/orig -> origin/gh/bdhirsh/669/orig 2025-09-07T07:34:58.1802624Z * [new branch] gh/bdhirsh/670/base -> origin/gh/bdhirsh/670/base 2025-09-07T07:34:58.1802766Z * [new branch] gh/bdhirsh/670/head -> origin/gh/bdhirsh/670/head 2025-09-07T07:34:58.1802922Z * [new branch] gh/bdhirsh/670/orig -> origin/gh/bdhirsh/670/orig 2025-09-07T07:34:58.1803219Z * [new branch] gh/benjaminglass1/100/base -> origin/gh/benjaminglass1/100/base 2025-09-07T07:34:58.1803376Z * [new branch] gh/benjaminglass1/100/head -> origin/gh/benjaminglass1/100/head 2025-09-07T07:34:58.1803543Z * [new branch] gh/benjaminglass1/100/orig -> origin/gh/benjaminglass1/100/orig 2025-09-07T07:34:58.1803703Z * [new branch] gh/benjaminglass1/101/base -> origin/gh/benjaminglass1/101/base 2025-09-07T07:34:58.1803861Z * [new branch] gh/benjaminglass1/101/head -> origin/gh/benjaminglass1/101/head 2025-09-07T07:34:58.1804035Z * [new branch] gh/benjaminglass1/101/orig -> origin/gh/benjaminglass1/101/orig 2025-09-07T07:34:58.1805289Z * [new branch] gh/benjaminglass1/102/base -> origin/gh/benjaminglass1/102/base 2025-09-07T07:34:58.1805606Z * [new branch] gh/benjaminglass1/102/head -> origin/gh/benjaminglass1/102/head 2025-09-07T07:34:58.1806567Z * [new branch] gh/benjaminglass1/102/orig -> origin/gh/benjaminglass1/102/orig 2025-09-07T07:34:58.1807583Z * [new branch] gh/benjaminglass1/103/base -> origin/gh/benjaminglass1/103/base 2025-09-07T07:34:58.1811777Z * [new branch] gh/benjaminglass1/103/head -> origin/gh/benjaminglass1/103/head 2025-09-07T07:34:58.1811938Z * [new branch] gh/benjaminglass1/103/orig -> origin/gh/benjaminglass1/103/orig 2025-09-07T07:34:58.1812403Z * [new branch] gh/benjaminglass1/104/base -> origin/gh/benjaminglass1/104/base 2025-09-07T07:34:58.1812586Z * [new branch] gh/benjaminglass1/104/head -> origin/gh/benjaminglass1/104/head 2025-09-07T07:34:58.1812755Z * [new branch] gh/benjaminglass1/104/orig -> origin/gh/benjaminglass1/104/orig 2025-09-07T07:34:58.1812915Z * [new branch] gh/benjaminglass1/105/base -> origin/gh/benjaminglass1/105/base 2025-09-07T07:34:58.1813078Z * [new branch] gh/benjaminglass1/105/head -> origin/gh/benjaminglass1/105/head 2025-09-07T07:34:58.1816232Z * [new branch] gh/benjaminglass1/105/orig -> origin/gh/benjaminglass1/105/orig 2025-09-07T07:34:58.1816405Z * [new branch] gh/benjaminglass1/106/base -> origin/gh/benjaminglass1/106/base 2025-09-07T07:34:58.1816576Z * [new branch] gh/benjaminglass1/106/head -> origin/gh/benjaminglass1/106/head 2025-09-07T07:34:58.1816757Z * [new branch] gh/benjaminglass1/106/orig -> origin/gh/benjaminglass1/106/orig 2025-09-07T07:34:58.1816930Z * [new branch] gh/benjaminglass1/79/base -> origin/gh/benjaminglass1/79/base 2025-09-07T07:34:58.1819847Z * [new branch] gh/benjaminglass1/79/head -> origin/gh/benjaminglass1/79/head 2025-09-07T07:34:58.1820051Z * [new branch] gh/benjaminglass1/79/orig -> origin/gh/benjaminglass1/79/orig 2025-09-07T07:34:58.1820579Z * [new branch] gh/benjaminglass1/86/base -> origin/gh/benjaminglass1/86/base 2025-09-07T07:34:58.1820812Z * [new branch] gh/benjaminglass1/86/head -> origin/gh/benjaminglass1/86/head 2025-09-07T07:34:58.1820976Z * [new branch] gh/benjaminglass1/86/orig 
-> origin/gh/benjaminglass1/86/orig 2025-09-07T07:34:58.1824862Z * [new branch] gh/benjaminglass1/89/base -> origin/gh/benjaminglass1/89/base 2025-09-07T07:34:58.1825198Z * [new branch] gh/benjaminglass1/89/head -> origin/gh/benjaminglass1/89/head 2025-09-07T07:34:58.1825866Z * [new branch] gh/benjaminglass1/89/orig -> origin/gh/benjaminglass1/89/orig 2025-09-07T07:34:58.1826122Z * [new branch] gh/benjaminglass1/91/base -> origin/gh/benjaminglass1/91/base 2025-09-07T07:34:58.1826288Z * [new branch] gh/benjaminglass1/91/head -> origin/gh/benjaminglass1/91/head 2025-09-07T07:34:58.1826456Z * [new branch] gh/benjaminglass1/91/orig -> origin/gh/benjaminglass1/91/orig 2025-09-07T07:34:58.1826791Z * [new branch] gh/benjaminglass1/93/base -> origin/gh/benjaminglass1/93/base 2025-09-07T07:34:58.1826956Z * [new branch] gh/benjaminglass1/93/head -> origin/gh/benjaminglass1/93/head 2025-09-07T07:34:58.1829305Z * [new branch] gh/benjaminglass1/93/orig -> origin/gh/benjaminglass1/93/orig 2025-09-07T07:34:58.1829679Z * [new branch] gh/benjaminglass1/95/base -> origin/gh/benjaminglass1/95/base 2025-09-07T07:34:58.1829920Z * [new branch] gh/benjaminglass1/95/head -> origin/gh/benjaminglass1/95/head 2025-09-07T07:34:58.1830080Z * [new branch] gh/benjaminglass1/95/orig -> origin/gh/benjaminglass1/95/orig 2025-09-07T07:34:58.1830233Z * [new branch] gh/benjaminglass1/97/base -> origin/gh/benjaminglass1/97/base 2025-09-07T07:34:58.1833693Z * [new branch] gh/benjaminglass1/97/head -> origin/gh/benjaminglass1/97/head 2025-09-07T07:34:58.1833853Z * [new branch] gh/benjaminglass1/97/orig -> origin/gh/benjaminglass1/97/orig 2025-09-07T07:34:58.1834013Z * [new branch] gh/benjaminglass1/99/base -> origin/gh/benjaminglass1/99/base 2025-09-07T07:34:58.1834163Z * [new branch] gh/benjaminglass1/99/head -> origin/gh/benjaminglass1/99/head 2025-09-07T07:34:58.1834324Z * [new branch] gh/benjaminglass1/99/orig -> origin/gh/benjaminglass1/99/orig 2025-09-07T07:34:58.1834556Z * [new branch] gh/bobrenjc93/514/base -> origin/gh/bobrenjc93/514/base 2025-09-07T07:34:58.1835053Z * [new branch] gh/bobrenjc93/514/head -> origin/gh/bobrenjc93/514/head 2025-09-07T07:34:58.1835938Z * [new branch] gh/bobrenjc93/514/orig -> origin/gh/bobrenjc93/514/orig 2025-09-07T07:34:58.1836839Z * [new branch] gh/bobrenjc93/521/base -> origin/gh/bobrenjc93/521/base 2025-09-07T07:34:58.1837221Z * [new branch] gh/bobrenjc93/521/head -> origin/gh/bobrenjc93/521/head 2025-09-07T07:34:58.1838157Z * [new branch] gh/bobrenjc93/521/orig -> origin/gh/bobrenjc93/521/orig 2025-09-07T07:34:58.1840030Z * [new branch] gh/bobrenjc93/522/base -> origin/gh/bobrenjc93/522/base 2025-09-07T07:34:58.1840393Z * [new branch] gh/bobrenjc93/522/head -> origin/gh/bobrenjc93/522/head 2025-09-07T07:34:58.1840563Z * [new branch] gh/bobrenjc93/522/orig -> origin/gh/bobrenjc93/522/orig 2025-09-07T07:34:58.1841187Z * [new branch] gh/bobrenjc93/525/base -> origin/gh/bobrenjc93/525/base 2025-09-07T07:34:58.1843338Z * [new branch] gh/bobrenjc93/525/head -> origin/gh/bobrenjc93/525/head 2025-09-07T07:34:58.1843555Z * [new branch] gh/bobrenjc93/525/orig -> origin/gh/bobrenjc93/525/orig 2025-09-07T07:34:58.1843733Z * [new branch] gh/bobrenjc93/526/base -> origin/gh/bobrenjc93/526/base 2025-09-07T07:34:58.1844114Z * [new branch] gh/bobrenjc93/526/head -> origin/gh/bobrenjc93/526/head 2025-09-07T07:34:58.1845182Z * [new branch] gh/bobrenjc93/526/orig -> origin/gh/bobrenjc93/526/orig 2025-09-07T07:34:58.1848251Z * [new branch] gh/bobrenjc93/527/base -> origin/gh/bobrenjc93/527/base 
2025-09-07T07:34:58.1848629Z * [new branch] gh/bobrenjc93/527/head -> origin/gh/bobrenjc93/527/head 2025-09-07T07:34:58.1850032Z * [new branch] gh/bobrenjc93/527/orig -> origin/gh/bobrenjc93/527/orig 2025-09-07T07:34:58.1850352Z * [new branch] gh/bobrenjc93/528/base -> origin/gh/bobrenjc93/528/base 2025-09-07T07:34:58.1855349Z * [new branch] gh/bobrenjc93/528/head -> origin/gh/bobrenjc93/528/head 2025-09-07T07:34:58.1855540Z * [new branch] gh/bobrenjc93/528/orig -> origin/gh/bobrenjc93/528/orig 2025-09-07T07:34:58.1855680Z * [new branch] gh/bobrenjc93/529/base -> origin/gh/bobrenjc93/529/base 2025-09-07T07:34:58.1855821Z * [new branch] gh/bobrenjc93/529/head -> origin/gh/bobrenjc93/529/head 2025-09-07T07:34:58.1856274Z * [new branch] gh/bobrenjc93/529/orig -> origin/gh/bobrenjc93/529/orig 2025-09-07T07:34:58.1856408Z * [new branch] gh/bobrenjc93/535/base -> origin/gh/bobrenjc93/535/base 2025-09-07T07:34:58.1856549Z * [new branch] gh/bobrenjc93/535/head -> origin/gh/bobrenjc93/535/head 2025-09-07T07:34:58.1856856Z * [new branch] gh/bobrenjc93/535/orig -> origin/gh/bobrenjc93/535/orig 2025-09-07T07:34:58.1857021Z * [new branch] gh/bobrenjc93/537/base -> origin/gh/bobrenjc93/537/base 2025-09-07T07:34:58.1857975Z * [new branch] gh/bobrenjc93/537/head -> origin/gh/bobrenjc93/537/head 2025-09-07T07:34:58.1858132Z * [new branch] gh/bobrenjc93/537/orig -> origin/gh/bobrenjc93/537/orig 2025-09-07T07:34:58.1862569Z * [new branch] gh/bobrenjc93/539/base -> origin/gh/bobrenjc93/539/base 2025-09-07T07:34:58.1862766Z * [new branch] gh/bobrenjc93/539/head -> origin/gh/bobrenjc93/539/head 2025-09-07T07:34:58.1862914Z * [new branch] gh/bobrenjc93/539/orig -> origin/gh/bobrenjc93/539/orig 2025-09-07T07:34:58.1863047Z * [new branch] gh/bobrenjc93/540/base -> origin/gh/bobrenjc93/540/base 2025-09-07T07:34:58.1863183Z * [new branch] gh/bobrenjc93/540/head -> origin/gh/bobrenjc93/540/head 2025-09-07T07:34:58.1863511Z * [new branch] gh/bobrenjc93/540/orig -> origin/gh/bobrenjc93/540/orig 2025-09-07T07:34:58.1863693Z * [new branch] gh/bobrenjc93/541/base -> origin/gh/bobrenjc93/541/base 2025-09-07T07:34:58.1867176Z * [new branch] gh/bobrenjc93/541/head -> origin/gh/bobrenjc93/541/head 2025-09-07T07:34:58.1867456Z * [new branch] gh/bobrenjc93/541/orig -> origin/gh/bobrenjc93/541/orig 2025-09-07T07:34:58.1876123Z * [new branch] gh/bobrenjc93/542/base -> origin/gh/bobrenjc93/542/base 2025-09-07T07:34:58.1881566Z * [new branch] gh/bobrenjc93/542/head -> origin/gh/bobrenjc93/542/head 2025-09-07T07:34:58.1883930Z * [new branch] gh/bobrenjc93/542/orig -> origin/gh/bobrenjc93/542/orig 2025-09-07T07:34:58.1884133Z * [new branch] gh/bobrenjc93/543/base -> origin/gh/bobrenjc93/543/base 2025-09-07T07:34:58.1884294Z * [new branch] gh/bobrenjc93/543/head -> origin/gh/bobrenjc93/543/head 2025-09-07T07:34:58.1884447Z * [new branch] gh/bobrenjc93/543/orig -> origin/gh/bobrenjc93/543/orig 2025-09-07T07:34:58.1884591Z * [new branch] gh/bobrenjc93/544/base -> origin/gh/bobrenjc93/544/base 2025-09-07T07:34:58.1884736Z * [new branch] gh/bobrenjc93/544/head -> origin/gh/bobrenjc93/544/head 2025-09-07T07:34:58.1884887Z * [new branch] gh/bobrenjc93/544/orig -> origin/gh/bobrenjc93/544/orig 2025-09-07T07:34:58.1885035Z * [new branch] gh/bobrenjc93/545/base -> origin/gh/bobrenjc93/545/base 2025-09-07T07:34:58.1885190Z * [new branch] gh/bobrenjc93/545/head -> origin/gh/bobrenjc93/545/head 2025-09-07T07:34:58.1885343Z * [new branch] gh/bobrenjc93/545/orig -> origin/gh/bobrenjc93/545/orig 2025-09-07T07:34:58.1885509Z * [new branch] 
gh/bobrenjc93/546/base -> origin/gh/bobrenjc93/546/base 2025-09-07T07:34:58.1885667Z * [new branch] gh/bobrenjc93/546/head -> origin/gh/bobrenjc93/546/head 2025-09-07T07:34:58.1885822Z * [new branch] gh/bobrenjc93/546/orig -> origin/gh/bobrenjc93/546/orig 2025-09-07T07:34:58.1885992Z * [new branch] gh/bobrenjc93/547/base -> origin/gh/bobrenjc93/547/base 2025-09-07T07:34:58.1886145Z * [new branch] gh/bobrenjc93/547/head -> origin/gh/bobrenjc93/547/head 2025-09-07T07:34:58.1886302Z * [new branch] gh/bobrenjc93/547/orig -> origin/gh/bobrenjc93/547/orig 2025-09-07T07:34:58.1886604Z * [new branch] gh/bobrenjc93/548/base -> origin/gh/bobrenjc93/548/base 2025-09-07T07:34:58.1886953Z * [new branch] gh/bobrenjc93/548/head -> origin/gh/bobrenjc93/548/head 2025-09-07T07:34:58.1887126Z * [new branch] gh/bobrenjc93/548/orig -> origin/gh/bobrenjc93/548/orig 2025-09-07T07:34:58.1887284Z * [new branch] gh/bobrenjc93/549/base -> origin/gh/bobrenjc93/549/base 2025-09-07T07:34:58.1887441Z * [new branch] gh/bobrenjc93/549/head -> origin/gh/bobrenjc93/549/head 2025-09-07T07:34:58.1887589Z * [new branch] gh/bobrenjc93/549/orig -> origin/gh/bobrenjc93/549/orig 2025-09-07T07:34:58.1887748Z * [new branch] gh/bobrenjc93/550/base -> origin/gh/bobrenjc93/550/base 2025-09-07T07:34:58.1887898Z * [new branch] gh/bobrenjc93/550/head -> origin/gh/bobrenjc93/550/head 2025-09-07T07:34:58.1888049Z * [new branch] gh/bobrenjc93/550/orig -> origin/gh/bobrenjc93/550/orig 2025-09-07T07:34:58.1888221Z * [new branch] gh/bobrenjc93/551/base -> origin/gh/bobrenjc93/551/base 2025-09-07T07:34:58.1888372Z * [new branch] gh/bobrenjc93/551/head -> origin/gh/bobrenjc93/551/head 2025-09-07T07:34:58.1888524Z * [new branch] gh/bobrenjc93/551/orig -> origin/gh/bobrenjc93/551/orig 2025-09-07T07:34:58.1894809Z * [new branch] gh/bobrenjc93/552/base -> origin/gh/bobrenjc93/552/base 2025-09-07T07:34:58.1894965Z * [new branch] gh/bobrenjc93/552/head -> origin/gh/bobrenjc93/552/head 2025-09-07T07:34:58.1895197Z * [new branch] gh/bobrenjc93/552/orig -> origin/gh/bobrenjc93/552/orig 2025-09-07T07:34:58.1895335Z * [new branch] gh/bobrenjc93/553/base -> origin/gh/bobrenjc93/553/base 2025-09-07T07:34:58.1895492Z * [new branch] gh/bobrenjc93/553/head -> origin/gh/bobrenjc93/553/head 2025-09-07T07:34:58.1895632Z * [new branch] gh/bobrenjc93/553/orig -> origin/gh/bobrenjc93/553/orig 2025-09-07T07:34:58.1895780Z * [new branch] gh/bobrenjc93/554/base -> origin/gh/bobrenjc93/554/base 2025-09-07T07:34:58.1895917Z * [new branch] gh/bobrenjc93/554/head -> origin/gh/bobrenjc93/554/head 2025-09-07T07:34:58.1896061Z * [new branch] gh/bobrenjc93/554/orig -> origin/gh/bobrenjc93/554/orig 2025-09-07T07:34:58.1896202Z * [new branch] gh/bobrenjc93/555/base -> origin/gh/bobrenjc93/555/base 2025-09-07T07:34:58.1902332Z * [new branch] gh/bobrenjc93/555/head -> origin/gh/bobrenjc93/555/head 2025-09-07T07:34:58.1904367Z * [new branch] gh/bobrenjc93/555/orig -> origin/gh/bobrenjc93/555/orig 2025-09-07T07:34:58.1904664Z * [new branch] gh/bobrenjc93/556/base -> origin/gh/bobrenjc93/556/base 2025-09-07T07:34:58.1904988Z * [new branch] gh/bobrenjc93/556/head -> origin/gh/bobrenjc93/556/head 2025-09-07T07:34:58.1905165Z * [new branch] gh/bobrenjc93/556/orig -> origin/gh/bobrenjc93/556/orig 2025-09-07T07:34:58.1905333Z * [new branch] gh/briancoutinho/2/base -> origin/gh/briancoutinho/2/base 2025-09-07T07:34:58.1905493Z * [new branch] gh/briancoutinho/2/head -> origin/gh/briancoutinho/2/head 2025-09-07T07:34:58.1905637Z * [new branch] gh/c00w/23/base -> origin/gh/c00w/23/base 
2025-09-07T07:34:58.1905783Z * [new branch] gh/c00w/23/head -> origin/gh/c00w/23/head 2025-09-07T07:34:58.1905906Z * [new branch] gh/c00w/48/base -> origin/gh/c00w/48/base 2025-09-07T07:34:58.1906023Z * [new branch] gh/c00w/48/head -> origin/gh/c00w/48/head 2025-09-07T07:34:58.1906145Z * [new branch] gh/c00w/48/orig -> origin/gh/c00w/48/orig 2025-09-07T07:34:58.1906261Z * [new branch] gh/c00w/53/base -> origin/gh/c00w/53/base 2025-09-07T07:34:58.1906529Z * [new branch] gh/c00w/53/head -> origin/gh/c00w/53/head 2025-09-07T07:34:58.1906648Z * [new branch] gh/c00w/53/orig -> origin/gh/c00w/53/orig 2025-09-07T07:34:58.1906762Z * [new branch] gh/c00w/54/base -> origin/gh/c00w/54/base 2025-09-07T07:34:58.1906895Z * [new branch] gh/c00w/54/head -> origin/gh/c00w/54/head 2025-09-07T07:34:58.1911335Z * [new branch] gh/c00w/54/orig -> origin/gh/c00w/54/orig 2025-09-07T07:34:58.1913414Z * [new branch] gh/c00w/55/base -> origin/gh/c00w/55/base 2025-09-07T07:34:58.1913675Z * [new branch] gh/c00w/55/head -> origin/gh/c00w/55/head 2025-09-07T07:34:58.1919235Z * [new branch] gh/c00w/55/orig -> origin/gh/c00w/55/orig 2025-09-07T07:34:58.1922227Z * [new branch] gh/c00w/56/base -> origin/gh/c00w/56/base 2025-09-07T07:34:58.1922423Z * [new branch] gh/c00w/56/head -> origin/gh/c00w/56/head 2025-09-07T07:34:58.1922549Z * [new branch] gh/c00w/56/orig -> origin/gh/c00w/56/orig 2025-09-07T07:34:58.1922693Z * [new branch] gh/clee2000/1/base -> origin/gh/clee2000/1/base 2025-09-07T07:34:58.1922985Z * [new branch] gh/clee2000/1/head -> origin/gh/clee2000/1/head 2025-09-07T07:34:58.1923120Z * [new branch] gh/clee2000/1/orig -> origin/gh/clee2000/1/orig 2025-09-07T07:34:58.1923286Z * [new branch] gh/coconutruben/1/base -> origin/gh/coconutruben/1/base 2025-09-07T07:34:58.1923434Z * [new branch] gh/coconutruben/1/head -> origin/gh/coconutruben/1/head 2025-09-07T07:34:58.1923599Z * [new branch] gh/coconutruben/11/base -> origin/gh/coconutruben/11/base 2025-09-07T07:34:58.1923752Z * [new branch] gh/coconutruben/11/head -> origin/gh/coconutruben/11/head 2025-09-07T07:34:58.1923906Z * [new branch] gh/coconutruben/11/orig -> origin/gh/coconutruben/11/orig 2025-09-07T07:34:58.1924061Z * [new branch] gh/coconutruben/12/base -> origin/gh/coconutruben/12/base 2025-09-07T07:34:58.1924206Z * [new branch] gh/coconutruben/12/head -> origin/gh/coconutruben/12/head 2025-09-07T07:34:58.1924363Z * [new branch] gh/coconutruben/12/orig -> origin/gh/coconutruben/12/orig 2025-09-07T07:34:58.1924507Z * [new branch] gh/coconutruben/13/base -> origin/gh/coconutruben/13/base 2025-09-07T07:34:58.1924659Z * [new branch] gh/coconutruben/13/head -> origin/gh/coconutruben/13/head 2025-09-07T07:34:58.1924808Z * [new branch] gh/coconutruben/13/orig -> origin/gh/coconutruben/13/orig 2025-09-07T07:34:58.1924951Z * [new branch] gh/coconutruben/14/base -> origin/gh/coconutruben/14/base 2025-09-07T07:34:58.1925104Z * [new branch] gh/coconutruben/14/head -> origin/gh/coconutruben/14/head 2025-09-07T07:34:58.1925275Z * [new branch] gh/coconutruben/14/orig -> origin/gh/coconutruben/14/orig 2025-09-07T07:34:58.1930048Z * [new branch] gh/coconutruben/15/base -> origin/gh/coconutruben/15/base 2025-09-07T07:34:58.1935578Z * [new branch] gh/coconutruben/15/head -> origin/gh/coconutruben/15/head 2025-09-07T07:34:58.1940685Z * [new branch] gh/coconutruben/15/orig -> origin/gh/coconutruben/15/orig 2025-09-07T07:34:58.1946191Z * [new branch] gh/coconutruben/16/base -> origin/gh/coconutruben/16/base 2025-09-07T07:34:58.1953290Z * [new branch] gh/coconutruben/16/head -> 
origin/gh/coconutruben/16/head 2025-09-07T07:34:58.1953669Z * [new branch] gh/coconutruben/16/orig -> origin/gh/coconutruben/16/orig 2025-09-07T07:34:58.1953874Z * [new branch] gh/coconutruben/17/base -> origin/gh/coconutruben/17/base 2025-09-07T07:34:58.1954384Z * [new branch] gh/coconutruben/17/head -> origin/gh/coconutruben/17/head 2025-09-07T07:34:58.1954595Z * [new branch] gh/coconutruben/17/orig -> origin/gh/coconutruben/17/orig 2025-09-07T07:34:58.1954745Z * [new branch] gh/coconutruben/18/base -> origin/gh/coconutruben/18/base 2025-09-07T07:34:58.1954913Z * [new branch] gh/coconutruben/18/head -> origin/gh/coconutruben/18/head 2025-09-07T07:34:58.1955071Z * [new branch] gh/coconutruben/18/orig -> origin/gh/coconutruben/18/orig 2025-09-07T07:34:58.1955226Z * [new branch] gh/coconutruben/19/base -> origin/gh/coconutruben/19/base 2025-09-07T07:34:58.1955373Z * [new branch] gh/coconutruben/19/head -> origin/gh/coconutruben/19/head 2025-09-07T07:34:58.1955550Z * [new branch] gh/coconutruben/19/orig -> origin/gh/coconutruben/19/orig 2025-09-07T07:34:58.1955700Z * [new branch] gh/coconutruben/20/base -> origin/gh/coconutruben/20/base 2025-09-07T07:34:58.1955847Z * [new branch] gh/coconutruben/20/head -> origin/gh/coconutruben/20/head 2025-09-07T07:34:58.1956004Z * [new branch] gh/coconutruben/20/orig -> origin/gh/coconutruben/20/orig 2025-09-07T07:34:58.1956154Z * [new branch] gh/coconutruben/21/base -> origin/gh/coconutruben/21/base 2025-09-07T07:34:58.1956369Z * [new branch] gh/coconutruben/21/head -> origin/gh/coconutruben/21/head 2025-09-07T07:34:58.1956514Z * [new branch] gh/coconutruben/21/orig -> origin/gh/coconutruben/21/orig 2025-09-07T07:34:58.1956665Z * [new branch] gh/coconutruben/22/base -> origin/gh/coconutruben/22/base 2025-09-07T07:34:58.1956808Z * [new branch] gh/coconutruben/22/head -> origin/gh/coconutruben/22/head 2025-09-07T07:34:58.1956954Z * [new branch] gh/coconutruben/22/orig -> origin/gh/coconutruben/22/orig 2025-09-07T07:34:58.1957107Z * [new branch] gh/coconutruben/24/base -> origin/gh/coconutruben/24/base 2025-09-07T07:34:58.1957251Z * [new branch] gh/coconutruben/24/head -> origin/gh/coconutruben/24/head 2025-09-07T07:34:58.1957401Z * [new branch] gh/coconutruben/24/orig -> origin/gh/coconutruben/24/orig 2025-09-07T07:34:58.1957549Z * [new branch] gh/coconutruben/25/base -> origin/gh/coconutruben/25/base 2025-09-07T07:34:58.1957696Z * [new branch] gh/coconutruben/25/head -> origin/gh/coconutruben/25/head 2025-09-07T07:34:58.1957840Z * [new branch] gh/coconutruben/25/orig -> origin/gh/coconutruben/25/orig 2025-09-07T07:34:58.1957982Z * [new branch] gh/coconutruben/28/base -> origin/gh/coconutruben/28/base 2025-09-07T07:34:58.1958130Z * [new branch] gh/coconutruben/28/head -> origin/gh/coconutruben/28/head 2025-09-07T07:34:58.1958275Z * [new branch] gh/coconutruben/28/orig -> origin/gh/coconutruben/28/orig 2025-09-07T07:34:58.1958441Z * [new branch] gh/coconutruben/29/base -> origin/gh/coconutruben/29/base 2025-09-07T07:34:58.1958601Z * [new branch] gh/coconutruben/29/head -> origin/gh/coconutruben/29/head 2025-09-07T07:34:58.1958908Z * [new branch] gh/coconutruben/29/orig -> origin/gh/coconutruben/29/orig 2025-09-07T07:34:58.1959098Z * [new branch] gh/coconutruben/30/base -> origin/gh/coconutruben/30/base 2025-09-07T07:34:58.1959401Z * [new branch] gh/coconutruben/30/head -> origin/gh/coconutruben/30/head 2025-09-07T07:34:58.1960095Z * [new branch] gh/coconutruben/30/orig -> origin/gh/coconutruben/30/orig 2025-09-07T07:34:58.1960280Z * [new branch] 
gh/coconutruben/31/base -> origin/gh/coconutruben/31/base 2025-09-07T07:34:58.1960614Z * [new branch] gh/coconutruben/31/head -> origin/gh/coconutruben/31/head 2025-09-07T07:34:58.1960819Z * [new branch] gh/coconutruben/31/orig -> origin/gh/coconutruben/31/orig 2025-09-07T07:34:58.1962196Z * [new branch] gh/coconutruben/32/base -> origin/gh/coconutruben/32/base 2025-09-07T07:34:58.1962361Z * [new branch] gh/coconutruben/32/head -> origin/gh/coconutruben/32/head 2025-09-07T07:34:58.1962541Z * [new branch] gh/coconutruben/32/orig -> origin/gh/coconutruben/32/orig 2025-09-07T07:34:58.1962692Z * [new branch] gh/coconutruben/33/base -> origin/gh/coconutruben/33/base 2025-09-07T07:34:58.1962862Z * [new branch] gh/coconutruben/33/head -> origin/gh/coconutruben/33/head 2025-09-07T07:34:58.1963069Z * [new branch] gh/coconutruben/33/orig -> origin/gh/coconutruben/33/orig 2025-09-07T07:34:58.1963380Z * [new branch] gh/coconutruben/34/base -> origin/gh/coconutruben/34/base 2025-09-07T07:34:58.1963580Z * [new branch] gh/coconutruben/34/head -> origin/gh/coconutruben/34/head 2025-09-07T07:34:58.1965691Z * [new branch] gh/coconutruben/34/orig -> origin/gh/coconutruben/34/orig 2025-09-07T07:34:58.1965903Z * [new branch] gh/coconutruben/35/base -> origin/gh/coconutruben/35/base 2025-09-07T07:34:58.1966075Z * [new branch] gh/coconutruben/35/head -> origin/gh/coconutruben/35/head 2025-09-07T07:34:58.1967260Z * [new branch] gh/coconutruben/35/orig -> origin/gh/coconutruben/35/orig 2025-09-07T07:34:58.1969958Z * [new branch] gh/coconutruben/36/base -> origin/gh/coconutruben/36/base 2025-09-07T07:34:58.1970196Z * [new branch] gh/coconutruben/36/head -> origin/gh/coconutruben/36/head 2025-09-07T07:34:58.1972275Z * [new branch] gh/coconutruben/36/orig -> origin/gh/coconutruben/36/orig 2025-09-07T07:34:58.1972875Z * [new branch] gh/coconutruben/37/base -> origin/gh/coconutruben/37/base 2025-09-07T07:34:58.1973557Z * [new branch] gh/coconutruben/37/head -> origin/gh/coconutruben/37/head 2025-09-07T07:34:58.1974514Z * [new branch] gh/coconutruben/37/orig -> origin/gh/coconutruben/37/orig 2025-09-07T07:34:58.1975829Z * [new branch] gh/coconutruben/38/base -> origin/gh/coconutruben/38/base 2025-09-07T07:34:58.1976030Z * [new branch] gh/coconutruben/38/head -> origin/gh/coconutruben/38/head 2025-09-07T07:34:58.1978881Z * [new branch] gh/coconutruben/38/orig -> origin/gh/coconutruben/38/orig 2025-09-07T07:34:58.1979081Z * [new branch] gh/coconutruben/39/base -> origin/gh/coconutruben/39/base 2025-09-07T07:34:58.1979240Z * [new branch] gh/coconutruben/39/head -> origin/gh/coconutruben/39/head 2025-09-07T07:34:58.1979409Z * [new branch] gh/coconutruben/39/orig -> origin/gh/coconutruben/39/orig 2025-09-07T07:34:58.1984474Z * [new branch] gh/coconutruben/40/base -> origin/gh/coconutruben/40/base 2025-09-07T07:34:58.1984714Z * [new branch] gh/coconutruben/40/head -> origin/gh/coconutruben/40/head 2025-09-07T07:34:58.1984892Z * [new branch] gh/coconutruben/40/orig -> origin/gh/coconutruben/40/orig 2025-09-07T07:34:58.1985058Z * [new branch] gh/coconutruben/41/base -> origin/gh/coconutruben/41/base 2025-09-07T07:34:58.1985260Z * [new branch] gh/coconutruben/41/head -> origin/gh/coconutruben/41/head 2025-09-07T07:34:58.1985430Z * [new branch] gh/coconutruben/41/orig -> origin/gh/coconutruben/41/orig 2025-09-07T07:34:58.1985637Z * [new branch] gh/coconutruben/42/base -> origin/gh/coconutruben/42/base 2025-09-07T07:34:58.1986448Z * [new branch] gh/coconutruben/42/head -> origin/gh/coconutruben/42/head 2025-09-07T07:34:58.1986916Z 
* [new branch] gh/coconutruben/42/orig -> origin/gh/coconutruben/42/orig 2025-09-07T07:34:58.1989412Z * [new branch] gh/coconutruben/43/base -> origin/gh/coconutruben/43/base 2025-09-07T07:34:58.1995624Z * [new branch] gh/coconutruben/43/head -> origin/gh/coconutruben/43/head 2025-09-07T07:34:58.1995843Z * [new branch] gh/coconutruben/43/orig -> origin/gh/coconutruben/43/orig 2025-09-07T07:34:58.1996068Z * [new branch] gh/coconutruben/44/base -> origin/gh/coconutruben/44/base 2025-09-07T07:34:58.1996236Z * [new branch] gh/coconutruben/44/head -> origin/gh/coconutruben/44/head 2025-09-07T07:34:58.1996404Z * [new branch] gh/coconutruben/44/orig -> origin/gh/coconutruben/44/orig 2025-09-07T07:34:58.1996566Z * [new branch] gh/coconutruben/45/base -> origin/gh/coconutruben/45/base 2025-09-07T07:34:58.2001035Z * [new branch] gh/coconutruben/45/head -> origin/gh/coconutruben/45/head 2025-09-07T07:34:58.2001257Z * [new branch] gh/coconutruben/45/orig -> origin/gh/coconutruben/45/orig 2025-09-07T07:34:58.2001421Z * [new branch] gh/coconutruben/46/base -> origin/gh/coconutruben/46/base 2025-09-07T07:34:58.2001593Z * [new branch] gh/coconutruben/46/head -> origin/gh/coconutruben/46/head 2025-09-07T07:34:58.2001748Z * [new branch] gh/coconutruben/46/orig -> origin/gh/coconutruben/46/orig 2025-09-07T07:34:58.2002059Z * [new branch] gh/coconutruben/47/base -> origin/gh/coconutruben/47/base 2025-09-07T07:34:58.2002215Z * [new branch] gh/coconutruben/47/head -> origin/gh/coconutruben/47/head 2025-09-07T07:34:58.2002380Z * [new branch] gh/coconutruben/47/orig -> origin/gh/coconutruben/47/orig 2025-09-07T07:34:58.2002543Z * [new branch] gh/coconutruben/48/base -> origin/gh/coconutruben/48/base 2025-09-07T07:34:58.2003115Z * [new branch] gh/coconutruben/48/head -> origin/gh/coconutruben/48/head 2025-09-07T07:34:58.2003329Z * [new branch] gh/coconutruben/48/orig -> origin/gh/coconutruben/48/orig 2025-09-07T07:34:58.2003488Z * [new branch] gh/coconutruben/49/base -> origin/gh/coconutruben/49/base 2025-09-07T07:34:58.2003641Z * [new branch] gh/coconutruben/49/head -> origin/gh/coconutruben/49/head 2025-09-07T07:34:58.2003851Z * [new branch] gh/coconutruben/49/orig -> origin/gh/coconutruben/49/orig 2025-09-07T07:34:58.2005451Z * [new branch] gh/coconutruben/50/base -> origin/gh/coconutruben/50/base 2025-09-07T07:34:58.2005712Z * [new branch] gh/coconutruben/50/head -> origin/gh/coconutruben/50/head 2025-09-07T07:34:58.2006451Z * [new branch] gh/coconutruben/50/orig -> origin/gh/coconutruben/50/orig 2025-09-07T07:34:58.2013009Z * [new branch] gh/coconutruben/51/base -> origin/gh/coconutruben/51/base 2025-09-07T07:34:58.2013216Z * [new branch] gh/coconutruben/51/head -> origin/gh/coconutruben/51/head 2025-09-07T07:34:58.2013467Z * [new branch] gh/coconutruben/51/orig -> origin/gh/coconutruben/51/orig 2025-09-07T07:34:58.2013643Z * [new branch] gh/coconutruben/52/base -> origin/gh/coconutruben/52/base 2025-09-07T07:34:58.2013804Z * [new branch] gh/coconutruben/52/head -> origin/gh/coconutruben/52/head 2025-09-07T07:34:58.2014054Z * [new branch] gh/coconutruben/52/orig -> origin/gh/coconutruben/52/orig 2025-09-07T07:34:58.2017929Z * [new branch] gh/coconutruben/53/base -> origin/gh/coconutruben/53/base 2025-09-07T07:34:58.2018552Z * [new branch] gh/coconutruben/53/head -> origin/gh/coconutruben/53/head 2025-09-07T07:34:58.2018741Z * [new branch] gh/coconutruben/53/orig -> origin/gh/coconutruben/53/orig 2025-09-07T07:34:58.2018903Z * [new branch] gh/coconutruben/54/base -> origin/gh/coconutruben/54/base 
2025-09-07T07:34:58.2019261Z * [new branch] gh/coconutruben/54/head -> origin/gh/coconutruben/54/head 2025-09-07T07:34:58.2019420Z * [new branch] gh/coconutruben/54/orig -> origin/gh/coconutruben/54/orig 2025-09-07T07:34:58.2019567Z * [new branch] gh/coconutruben/55/base -> origin/gh/coconutruben/55/base 2025-09-07T07:34:58.2019713Z * [new branch] gh/coconutruben/55/head -> origin/gh/coconutruben/55/head 2025-09-07T07:34:58.2025324Z * [new branch] gh/coconutruben/55/orig -> origin/gh/coconutruben/55/orig 2025-09-07T07:34:58.2029336Z * [new branch] gh/coconutruben/56/base -> origin/gh/coconutruben/56/base 2025-09-07T07:34:58.2029524Z * [new branch] gh/coconutruben/56/head -> origin/gh/coconutruben/56/head 2025-09-07T07:34:58.2029697Z * [new branch] gh/coconutruben/56/orig -> origin/gh/coconutruben/56/orig 2025-09-07T07:34:58.2029865Z * [new branch] gh/coconutruben/57/base -> origin/gh/coconutruben/57/base 2025-09-07T07:34:58.2030050Z * [new branch] gh/coconutruben/57/head -> origin/gh/coconutruben/57/head 2025-09-07T07:34:58.2030225Z * [new branch] gh/coconutruben/57/orig -> origin/gh/coconutruben/57/orig 2025-09-07T07:34:58.2030395Z * [new branch] gh/coconutruben/58/base -> origin/gh/coconutruben/58/base 2025-09-07T07:34:58.2030759Z * [new branch] gh/coconutruben/58/head -> origin/gh/coconutruben/58/head 2025-09-07T07:34:58.2030949Z * [new branch] gh/coconutruben/58/orig -> origin/gh/coconutruben/58/orig 2025-09-07T07:34:58.2031117Z * [new branch] gh/coconutruben/59/base -> origin/gh/coconutruben/59/base 2025-09-07T07:34:58.2031275Z * [new branch] gh/coconutruben/59/head -> origin/gh/coconutruben/59/head 2025-09-07T07:34:58.2031438Z * [new branch] gh/coconutruben/59/orig -> origin/gh/coconutruben/59/orig 2025-09-07T07:34:58.2031596Z * [new branch] gh/coconutruben/60/base -> origin/gh/coconutruben/60/base 2025-09-07T07:34:58.2031742Z * [new branch] gh/coconutruben/60/head -> origin/gh/coconutruben/60/head 2025-09-07T07:34:58.2031916Z * [new branch] gh/coconutruben/60/orig -> origin/gh/coconutruben/60/orig 2025-09-07T07:34:58.2033763Z * [new branch] gh/coconutruben/61/base -> origin/gh/coconutruben/61/base 2025-09-07T07:34:58.2033929Z * [new branch] gh/coconutruben/61/head -> origin/gh/coconutruben/61/head 2025-09-07T07:34:58.2034084Z * [new branch] gh/coconutruben/61/orig -> origin/gh/coconutruben/61/orig 2025-09-07T07:34:58.2034245Z * [new branch] gh/coconutruben/62/base -> origin/gh/coconutruben/62/base 2025-09-07T07:34:58.2034412Z * [new branch] gh/coconutruben/62/head -> origin/gh/coconutruben/62/head 2025-09-07T07:34:58.2038022Z * [new branch] gh/coconutruben/62/orig -> origin/gh/coconutruben/62/orig 2025-09-07T07:34:58.2038185Z * [new branch] gh/coconutruben/63/base -> origin/gh/coconutruben/63/base 2025-09-07T07:34:58.2038637Z * [new branch] gh/coconutruben/63/head -> origin/gh/coconutruben/63/head 2025-09-07T07:34:58.2038802Z * [new branch] gh/coconutruben/63/orig -> origin/gh/coconutruben/63/orig 2025-09-07T07:34:58.2038971Z * [new branch] gh/coconutruben/64/base -> origin/gh/coconutruben/64/base 2025-09-07T07:34:58.2041304Z * [new branch] gh/coconutruben/64/head -> origin/gh/coconutruben/64/head 2025-09-07T07:34:58.2041470Z * [new branch] gh/coconutruben/64/orig -> origin/gh/coconutruben/64/orig 2025-09-07T07:34:58.2041637Z * [new branch] gh/coconutruben/65/base -> origin/gh/coconutruben/65/base 2025-09-07T07:34:58.2041799Z * [new branch] gh/coconutruben/65/head -> origin/gh/coconutruben/65/head 2025-09-07T07:34:58.2042353Z * [new branch] gh/coconutruben/65/orig -> 
origin/gh/coconutruben/65/orig 2025-09-07T07:34:58.2043556Z * [new branch] gh/coconutruben/66/base -> origin/gh/coconutruben/66/base 2025-09-07T07:34:58.2043886Z * [new branch] gh/coconutruben/66/head -> origin/gh/coconutruben/66/head 2025-09-07T07:34:58.2044989Z * [new branch] gh/coconutruben/66/orig -> origin/gh/coconutruben/66/orig 2025-09-07T07:34:58.2046853Z * [new branch] gh/codingwithsurya/12/base -> origin/gh/codingwithsurya/12/base 2025-09-07T07:34:58.2047189Z * [new branch] gh/codingwithsurya/12/head -> origin/gh/codingwithsurya/12/head 2025-09-07T07:34:58.2048250Z * [new branch] gh/codingwithsurya/12/orig -> origin/gh/codingwithsurya/12/orig 2025-09-07T07:34:58.2049127Z * [new branch] gh/codingwithsurya/14/base -> origin/gh/codingwithsurya/14/base 2025-09-07T07:34:58.2049605Z * [new branch] gh/codingwithsurya/14/head -> origin/gh/codingwithsurya/14/head 2025-09-07T07:34:58.2050586Z * [new branch] gh/codingwithsurya/14/orig -> origin/gh/codingwithsurya/14/orig 2025-09-07T07:34:58.2051818Z * [new branch] gh/codingwithsurya/15/base -> origin/gh/codingwithsurya/15/base 2025-09-07T07:34:58.2052365Z * [new branch] gh/codingwithsurya/15/head -> origin/gh/codingwithsurya/15/head 2025-09-07T07:34:58.2055608Z * [new branch] gh/codingwithsurya/15/orig -> origin/gh/codingwithsurya/15/orig 2025-09-07T07:34:58.2055769Z * [new branch] gh/codingwithsurya/16/base -> origin/gh/codingwithsurya/16/base 2025-09-07T07:34:58.2055940Z * [new branch] gh/codingwithsurya/16/head -> origin/gh/codingwithsurya/16/head 2025-09-07T07:34:58.2056086Z * [new branch] gh/codingwithsurya/16/orig -> origin/gh/codingwithsurya/16/orig 2025-09-07T07:34:58.2062413Z * [new branch] gh/codingwithsurya/17/base -> origin/gh/codingwithsurya/17/base 2025-09-07T07:34:58.2062663Z * [new branch] gh/codingwithsurya/17/head -> origin/gh/codingwithsurya/17/head 2025-09-07T07:34:58.2062853Z * [new branch] gh/codingwithsurya/17/orig -> origin/gh/codingwithsurya/17/orig 2025-09-07T07:34:58.2063030Z * [new branch] gh/codingwithsurya/18/base -> origin/gh/codingwithsurya/18/base 2025-09-07T07:34:58.2063236Z * [new branch] gh/codingwithsurya/18/head -> origin/gh/codingwithsurya/18/head 2025-09-07T07:34:58.2063407Z * [new branch] gh/codingwithsurya/18/orig -> origin/gh/codingwithsurya/18/orig 2025-09-07T07:34:58.2063586Z * [new branch] gh/codingwithsurya/19/base -> origin/gh/codingwithsurya/19/base 2025-09-07T07:34:58.2063760Z * [new branch] gh/codingwithsurya/19/head -> origin/gh/codingwithsurya/19/head 2025-09-07T07:34:58.2063936Z * [new branch] gh/codingwithsurya/19/orig -> origin/gh/codingwithsurya/19/orig 2025-09-07T07:34:58.2064123Z * [new branch] gh/codingwithsurya/20/base -> origin/gh/codingwithsurya/20/base 2025-09-07T07:34:58.2064525Z * [new branch] gh/codingwithsurya/20/head -> origin/gh/codingwithsurya/20/head 2025-09-07T07:34:58.2065393Z * [new branch] gh/codingwithsurya/20/orig -> origin/gh/codingwithsurya/20/orig 2025-09-07T07:34:58.2066365Z * [new branch] gh/codingwithsurya/21/base -> origin/gh/codingwithsurya/21/base 2025-09-07T07:34:58.2066866Z * [new branch] gh/codingwithsurya/21/head -> origin/gh/codingwithsurya/21/head 2025-09-07T07:34:58.2069691Z * [new branch] gh/codingwithsurya/21/orig -> origin/gh/codingwithsurya/21/orig 2025-09-07T07:34:58.2069888Z * [new branch] gh/colinchan15/1/base -> origin/gh/colinchan15/1/base 2025-09-07T07:34:58.2070164Z * [new branch] gh/colinchan15/1/head -> origin/gh/colinchan15/1/head 2025-09-07T07:34:58.2070318Z * [new branch] gh/colinchan15/2/base -> origin/gh/colinchan15/2/base 
2025-09-07T07:34:58.2070577Z * [new branch] gh/colinchan15/2/head -> origin/gh/colinchan15/2/head 2025-09-07T07:34:58.2071811Z * [new branch] gh/colinchan15/3/base -> origin/gh/colinchan15/3/base 2025-09-07T07:34:58.2071986Z * [new branch] gh/colinchan15/3/head -> origin/gh/colinchan15/3/head 2025-09-07T07:34:58.2075086Z * [new branch] gh/colinchan15/6/base -> origin/gh/colinchan15/6/base 2025-09-07T07:34:58.2075274Z * [new branch] gh/colinchan15/6/head -> origin/gh/colinchan15/6/head 2025-09-07T07:34:58.2075453Z * [new branch] gh/davidberard98/382/base -> origin/gh/davidberard98/382/base 2025-09-07T07:34:58.2075610Z * [new branch] gh/davidberard98/382/head -> origin/gh/davidberard98/382/head 2025-09-07T07:34:58.2076233Z * [new branch] gh/davidberard98/382/orig -> origin/gh/davidberard98/382/orig 2025-09-07T07:34:58.2077676Z * [new branch] gh/davidberard98/386/base -> origin/gh/davidberard98/386/base 2025-09-07T07:34:58.2077911Z * [new branch] gh/davidberard98/386/head -> origin/gh/davidberard98/386/head 2025-09-07T07:34:58.2078331Z * [new branch] gh/davidberard98/386/orig -> origin/gh/davidberard98/386/orig 2025-09-07T07:34:58.2079506Z * [new branch] gh/davidberard98/391/base -> origin/gh/davidberard98/391/base 2025-09-07T07:34:58.2079905Z * [new branch] gh/davidberard98/391/head -> origin/gh/davidberard98/391/head 2025-09-07T07:34:58.2082346Z * [new branch] gh/davidberard98/391/orig -> origin/gh/davidberard98/391/orig 2025-09-07T07:34:58.2082556Z * [new branch] gh/davidberard98/392/base -> origin/gh/davidberard98/392/base 2025-09-07T07:34:58.2082729Z * [new branch] gh/davidberard98/392/head -> origin/gh/davidberard98/392/head 2025-09-07T07:34:58.2083015Z * [new branch] gh/davidberard98/392/orig -> origin/gh/davidberard98/392/orig 2025-09-07T07:34:58.2084139Z * [new branch] gh/davidberard98/394/base -> origin/gh/davidberard98/394/base 2025-09-07T07:34:58.2084617Z * [new branch] gh/davidberard98/394/head -> origin/gh/davidberard98/394/head 2025-09-07T07:34:58.2085600Z * [new branch] gh/davidberard98/394/orig -> origin/gh/davidberard98/394/orig 2025-09-07T07:34:58.2087031Z * [new branch] gh/davidberard98/396/base -> origin/gh/davidberard98/396/base 2025-09-07T07:34:58.2087259Z * [new branch] gh/davidberard98/396/head -> origin/gh/davidberard98/396/head 2025-09-07T07:34:58.2088774Z * [new branch] gh/davidberard98/396/orig -> origin/gh/davidberard98/396/orig 2025-09-07T07:34:58.2089012Z * [new branch] gh/davidberard98/397/base -> origin/gh/davidberard98/397/base 2025-09-07T07:34:58.2097833Z * [new branch] gh/davidberard98/397/head -> origin/gh/davidberard98/397/head 2025-09-07T07:34:58.2098050Z * [new branch] gh/davidberard98/397/orig -> origin/gh/davidberard98/397/orig 2025-09-07T07:34:58.2098210Z * [new branch] gh/davidberard98/398/base -> origin/gh/davidberard98/398/base 2025-09-07T07:34:58.2098378Z * [new branch] gh/davidberard98/398/head -> origin/gh/davidberard98/398/head 2025-09-07T07:34:58.2098578Z * [new branch] gh/davidberard98/398/orig -> origin/gh/davidberard98/398/orig 2025-09-07T07:34:58.2098758Z * [new branch] gh/davidberard98/399/base -> origin/gh/davidberard98/399/base 2025-09-07T07:34:58.2098927Z * [new branch] gh/davidberard98/399/head -> origin/gh/davidberard98/399/head 2025-09-07T07:34:58.2099099Z * [new branch] gh/davidberard98/399/orig -> origin/gh/davidberard98/399/orig 2025-09-07T07:34:58.2099424Z * [new branch] gh/davidberard98/400/base -> origin/gh/davidberard98/400/base 2025-09-07T07:34:58.2099906Z * [new branch] gh/davidberard98/400/head -> 
origin/gh/davidberard98/400/head 2025-09-07T07:34:58.2100059Z * [new branch] gh/davidberard98/400/orig -> origin/gh/davidberard98/400/orig 2025-09-07T07:34:58.2100217Z * [new branch] gh/davidberard98/401/base -> origin/gh/davidberard98/401/base 2025-09-07T07:34:58.2100363Z * [new branch] gh/davidberard98/401/head -> origin/gh/davidberard98/401/head 2025-09-07T07:34:58.2100528Z * [new branch] gh/davidberard98/401/orig -> origin/gh/davidberard98/401/orig 2025-09-07T07:34:58.2100676Z * [new branch] gh/davidberard98/402/base -> origin/gh/davidberard98/402/base 2025-09-07T07:34:58.2103949Z * [new branch] gh/davidberard98/402/head -> origin/gh/davidberard98/402/head 2025-09-07T07:34:58.2104264Z * [new branch] gh/davidberard98/402/orig -> origin/gh/davidberard98/402/orig 2025-09-07T07:34:58.2104437Z * [new branch] gh/davidberard98/403/base -> origin/gh/davidberard98/403/base 2025-09-07T07:34:58.2104613Z * [new branch] gh/davidberard98/403/head -> origin/gh/davidberard98/403/head 2025-09-07T07:34:58.2104858Z * [new branch] gh/davidberard98/403/orig -> origin/gh/davidberard98/403/orig 2025-09-07T07:34:58.2105061Z * [new branch] gh/davidberard98/404/base -> origin/gh/davidberard98/404/base 2025-09-07T07:34:58.2105360Z * [new branch] gh/davidberard98/404/head -> origin/gh/davidberard98/404/head 2025-09-07T07:34:58.2109257Z * [new branch] gh/davidberard98/404/orig -> origin/gh/davidberard98/404/orig 2025-09-07T07:34:58.2109406Z * [new branch] gh/davidberard98/405/base -> origin/gh/davidberard98/405/base 2025-09-07T07:34:58.2109562Z * [new branch] gh/davidberard98/405/head -> origin/gh/davidberard98/405/head 2025-09-07T07:34:58.2109709Z * [new branch] gh/davidberard98/405/orig -> origin/gh/davidberard98/405/orig 2025-09-07T07:34:58.2109867Z * [new branch] gh/davidberard98/406/base -> origin/gh/davidberard98/406/base 2025-09-07T07:34:58.2110140Z * [new branch] gh/davidberard98/406/head -> origin/gh/davidberard98/406/head 2025-09-07T07:34:58.2110311Z * [new branch] gh/davidberard98/406/orig -> origin/gh/davidberard98/406/orig 2025-09-07T07:34:58.2116767Z * [new branch] gh/davidberard98/407/base -> origin/gh/davidberard98/407/base 2025-09-07T07:34:58.2116993Z * [new branch] gh/davidberard98/407/head -> origin/gh/davidberard98/407/head 2025-09-07T07:34:58.2117150Z * [new branch] gh/davidberard98/407/orig -> origin/gh/davidberard98/407/orig 2025-09-07T07:34:58.2117297Z * [new branch] gh/davidberard98/408/base -> origin/gh/davidberard98/408/base 2025-09-07T07:34:58.2117462Z * [new branch] gh/davidberard98/408/head -> origin/gh/davidberard98/408/head 2025-09-07T07:34:58.2117616Z * [new branch] gh/davidberard98/408/orig -> origin/gh/davidberard98/408/orig 2025-09-07T07:34:58.2117803Z * [new branch] gh/davidberard98/409/base -> origin/gh/davidberard98/409/base 2025-09-07T07:34:58.2117954Z * [new branch] gh/davidberard98/409/head -> origin/gh/davidberard98/409/head 2025-09-07T07:34:58.2118166Z * [new branch] gh/davidberard98/409/orig -> origin/gh/davidberard98/409/orig 2025-09-07T07:34:58.2120425Z * [new branch] gh/desertfire/594/base -> origin/gh/desertfire/594/base 2025-09-07T07:34:58.2120724Z * [new branch] gh/desertfire/594/head -> origin/gh/desertfire/594/head 2025-09-07T07:34:58.2120897Z * [new branch] gh/desertfire/594/orig -> origin/gh/desertfire/594/orig 2025-09-07T07:34:58.2121055Z * [new branch] gh/desertfire/595/base -> origin/gh/desertfire/595/base 2025-09-07T07:34:58.2121203Z * [new branch] gh/desertfire/595/head -> origin/gh/desertfire/595/head 2025-09-07T07:34:58.2121421Z * [new branch] 
gh/desertfire/595/orig -> origin/gh/desertfire/595/orig 2025-09-07T07:34:58.2121642Z * [new branch] gh/desertfire/597/base -> origin/gh/desertfire/597/base 2025-09-07T07:34:58.2123699Z * [new branch] gh/desertfire/597/head -> origin/gh/desertfire/597/head 2025-09-07T07:34:58.2123902Z * [new branch] gh/desertfire/597/orig -> origin/gh/desertfire/597/orig 2025-09-07T07:34:58.2124461Z * [new branch] gh/dharakk/1/base -> origin/gh/dharakk/1/base 2025-09-07T07:34:58.2125018Z * [new branch] gh/dharakk/1/head -> origin/gh/dharakk/1/head 2025-09-07T07:34:58.2128696Z * [new branch] gh/drisspg/149/base -> origin/gh/drisspg/149/base 2025-09-07T07:34:58.2128895Z * [new branch] gh/drisspg/149/head -> origin/gh/drisspg/149/head 2025-09-07T07:34:58.2129047Z * [new branch] gh/drisspg/149/orig -> origin/gh/drisspg/149/orig 2025-09-07T07:34:58.2129229Z * [new branch] gh/drisspg/159/base -> origin/gh/drisspg/159/base 2025-09-07T07:34:58.2129545Z * [new branch] gh/drisspg/159/head -> origin/gh/drisspg/159/head 2025-09-07T07:34:58.2129932Z * [new branch] gh/drisspg/159/orig -> origin/gh/drisspg/159/orig 2025-09-07T07:34:58.2132122Z * [new branch] gh/drisspg/166/base -> origin/gh/drisspg/166/base 2025-09-07T07:34:58.2132471Z * [new branch] gh/drisspg/166/head -> origin/gh/drisspg/166/head 2025-09-07T07:34:58.2132741Z * [new branch] gh/drisspg/166/orig -> origin/gh/drisspg/166/orig 2025-09-07T07:34:58.2134414Z * [new branch] gh/drisspg/170/base -> origin/gh/drisspg/170/base 2025-09-07T07:34:58.2135046Z * [new branch] gh/drisspg/170/head -> origin/gh/drisspg/170/head 2025-09-07T07:34:58.2135535Z * [new branch] gh/drisspg/170/orig -> origin/gh/drisspg/170/orig 2025-09-07T07:34:58.2136177Z * [new branch] gh/drisspg/173/base -> origin/gh/drisspg/173/base 2025-09-07T07:34:58.2136345Z * [new branch] gh/drisspg/173/head -> origin/gh/drisspg/173/head 2025-09-07T07:34:58.2140002Z * [new branch] gh/drisspg/173/orig -> origin/gh/drisspg/173/orig 2025-09-07T07:34:58.2140207Z * [new branch] gh/drisspg/177/base -> origin/gh/drisspg/177/base 2025-09-07T07:34:58.2140375Z * [new branch] gh/drisspg/177/head -> origin/gh/drisspg/177/head 2025-09-07T07:34:58.2140521Z * [new branch] gh/drisspg/177/orig -> origin/gh/drisspg/177/orig 2025-09-07T07:34:58.2140677Z * [new branch] gh/drisspg/178/base -> origin/gh/drisspg/178/base 2025-09-07T07:34:58.2140819Z * [new branch] gh/drisspg/178/head -> origin/gh/drisspg/178/head 2025-09-07T07:34:58.2141206Z * [new branch] gh/drisspg/178/orig -> origin/gh/drisspg/178/orig 2025-09-07T07:34:58.2142664Z * [new branch] gh/drisspg/180/base -> origin/gh/drisspg/180/base 2025-09-07T07:34:58.2142991Z * [new branch] gh/drisspg/180/head -> origin/gh/drisspg/180/head 2025-09-07T07:34:58.2143160Z * [new branch] gh/drisspg/180/orig -> origin/gh/drisspg/180/orig 2025-09-07T07:34:58.2146736Z * [new branch] gh/drisspg/181/base -> origin/gh/drisspg/181/base 2025-09-07T07:34:58.2151481Z * [new branch] gh/drisspg/181/head -> origin/gh/drisspg/181/head 2025-09-07T07:34:58.2157166Z * [new branch] gh/drisspg/181/orig -> origin/gh/drisspg/181/orig 2025-09-07T07:34:58.2162153Z * [new branch] gh/drisspg/182/base -> origin/gh/drisspg/182/base 2025-09-07T07:34:58.2162321Z * [new branch] gh/drisspg/182/head -> origin/gh/drisspg/182/head 2025-09-07T07:34:58.2162682Z * [new branch] gh/drisspg/183/base -> origin/gh/drisspg/183/base 2025-09-07T07:34:58.2162835Z * [new branch] gh/drisspg/183/head -> origin/gh/drisspg/183/head 2025-09-07T07:34:58.2163015Z * [new branch] gh/drisspg/184/base -> origin/gh/drisspg/184/base 
2025-09-07T07:34:58.2163163Z * [new branch] gh/drisspg/184/head -> origin/gh/drisspg/184/head 2025-09-07T07:34:58.2163326Z * [new branch] gh/drisspg/185/base -> origin/gh/drisspg/185/base 2025-09-07T07:34:58.2163466Z * [new branch] gh/drisspg/185/head -> origin/gh/drisspg/185/head 2025-09-07T07:34:58.2163608Z * [new branch] gh/drisspg/186/base -> origin/gh/drisspg/186/base 2025-09-07T07:34:58.2163765Z * [new branch] gh/drisspg/186/head -> origin/gh/drisspg/186/head 2025-09-07T07:34:58.2163914Z * [new branch] gh/drisspg/186/orig -> origin/gh/drisspg/186/orig 2025-09-07T07:34:58.2164065Z * [new branch] gh/drisspg/187/base -> origin/gh/drisspg/187/base 2025-09-07T07:34:58.2164217Z * [new branch] gh/drisspg/187/head -> origin/gh/drisspg/187/head 2025-09-07T07:34:58.2164369Z * [new branch] gh/drisspg/187/orig -> origin/gh/drisspg/187/orig 2025-09-07T07:34:58.2164517Z * [new branch] gh/drisspg/188/base -> origin/gh/drisspg/188/base 2025-09-07T07:34:58.2164714Z * [new branch] gh/drisspg/188/head -> origin/gh/drisspg/188/head 2025-09-07T07:34:58.2164855Z * [new branch] gh/drisspg/188/orig -> origin/gh/drisspg/188/orig 2025-09-07T07:34:58.2164988Z * [new branch] gh/drisspg/189/base -> origin/gh/drisspg/189/base 2025-09-07T07:34:58.2165127Z * [new branch] gh/drisspg/189/head -> origin/gh/drisspg/189/head 2025-09-07T07:34:58.2165259Z * [new branch] gh/drisspg/189/orig -> origin/gh/drisspg/189/orig 2025-09-07T07:34:58.2165399Z * [new branch] gh/drisspg/190/base -> origin/gh/drisspg/190/base 2025-09-07T07:34:58.2165530Z * [new branch] gh/drisspg/190/head -> origin/gh/drisspg/190/head 2025-09-07T07:34:58.2165660Z * [new branch] gh/drisspg/190/orig -> origin/gh/drisspg/190/orig 2025-09-07T07:34:58.2165820Z * [new branch] gh/drisspg/191/base -> origin/gh/drisspg/191/base 2025-09-07T07:34:58.2165972Z * [new branch] gh/drisspg/191/head -> origin/gh/drisspg/191/head 2025-09-07T07:34:58.2167368Z * [new branch] gh/drisspg/191/orig -> origin/gh/drisspg/191/orig 2025-09-07T07:34:58.2167531Z * [new branch] gh/drisspg/192/base -> origin/gh/drisspg/192/base 2025-09-07T07:34:58.2167724Z * [new branch] gh/drisspg/192/head -> origin/gh/drisspg/192/head 2025-09-07T07:34:58.2167919Z * [new branch] gh/drisspg/192/orig -> origin/gh/drisspg/192/orig 2025-09-07T07:34:58.2168075Z * [new branch] gh/drisspg/193/base -> origin/gh/drisspg/193/base 2025-09-07T07:34:58.2168293Z * [new branch] gh/drisspg/193/head -> origin/gh/drisspg/193/head 2025-09-07T07:34:58.2171013Z * [new branch] gh/drisspg/193/orig -> origin/gh/drisspg/193/orig 2025-09-07T07:34:58.2171369Z * [new branch] gh/drisspg/194/base -> origin/gh/drisspg/194/base 2025-09-07T07:34:58.2171618Z * [new branch] gh/drisspg/194/head -> origin/gh/drisspg/194/head 2025-09-07T07:34:58.2171785Z * [new branch] gh/drisspg/194/orig -> origin/gh/drisspg/194/orig 2025-09-07T07:34:58.2172061Z * [new branch] gh/drisspg/195/base -> origin/gh/drisspg/195/base 2025-09-07T07:34:58.2172803Z * [new branch] gh/drisspg/195/head -> origin/gh/drisspg/195/head 2025-09-07T07:34:58.2173483Z * [new branch] gh/drisspg/195/orig -> origin/gh/drisspg/195/orig 2025-09-07T07:34:58.2177697Z * [new branch] gh/drisspg/196/base -> origin/gh/drisspg/196/base 2025-09-07T07:34:58.2178038Z * [new branch] gh/drisspg/196/head -> origin/gh/drisspg/196/head 2025-09-07T07:34:58.2178235Z * [new branch] gh/drisspg/196/orig -> origin/gh/drisspg/196/orig 2025-09-07T07:34:58.2178503Z * [new branch] gh/drisspg/197/base -> origin/gh/drisspg/197/base 2025-09-07T07:34:58.2178695Z * [new branch] gh/drisspg/197/head -> 
origin/gh/drisspg/197/head 2025-09-07T07:34:58.2179430Z * [new branch] gh/drisspg/197/orig -> origin/gh/drisspg/197/orig 2025-09-07T07:34:58.2179659Z * [new branch] gh/drisspg/198/base -> origin/gh/drisspg/198/base 2025-09-07T07:34:58.2179821Z * [new branch] gh/drisspg/198/head -> origin/gh/drisspg/198/head 2025-09-07T07:34:58.2180322Z * [new branch] gh/drisspg/198/orig -> origin/gh/drisspg/198/orig 2025-09-07T07:34:58.2184311Z * [new branch] gh/drisspg/199/base -> origin/gh/drisspg/199/base 2025-09-07T07:34:58.2184497Z * [new branch] gh/drisspg/199/head -> origin/gh/drisspg/199/head 2025-09-07T07:34:58.2184637Z * [new branch] gh/drisspg/199/orig -> origin/gh/drisspg/199/orig 2025-09-07T07:34:58.2184945Z * [new branch] gh/dsjohns2/1/base -> origin/gh/dsjohns2/1/base 2025-09-07T07:34:58.2185102Z * [new branch] gh/dsjohns2/1/head -> origin/gh/dsjohns2/1/head 2025-09-07T07:34:58.2185520Z * [new branch] gh/eellison/784/base -> origin/gh/eellison/784/base 2025-09-07T07:34:58.2186852Z * [new branch] gh/eellison/784/head -> origin/gh/eellison/784/head 2025-09-07T07:34:58.2187034Z * [new branch] gh/eellison/784/orig -> origin/gh/eellison/784/orig 2025-09-07T07:34:58.2190958Z * [new branch] gh/eellison/785/base -> origin/gh/eellison/785/base 2025-09-07T07:34:58.2195144Z * [new branch] gh/eellison/785/head -> origin/gh/eellison/785/head 2025-09-07T07:34:58.2200656Z * [new branch] gh/eellison/785/orig -> origin/gh/eellison/785/orig 2025-09-07T07:34:58.2202525Z * [new branch] gh/eellison/789/base -> origin/gh/eellison/789/base 2025-09-07T07:34:58.2202694Z * [new branch] gh/eellison/789/head -> origin/gh/eellison/789/head 2025-09-07T07:34:58.2203004Z * [new branch] gh/eellison/789/orig -> origin/gh/eellison/789/orig 2025-09-07T07:34:58.2203144Z * [new branch] gh/eellison/800/base -> origin/gh/eellison/800/base 2025-09-07T07:34:58.2203282Z * [new branch] gh/eellison/800/head -> origin/gh/eellison/800/head 2025-09-07T07:34:58.2203413Z * [new branch] gh/eellison/800/orig -> origin/gh/eellison/800/orig 2025-09-07T07:34:58.2203571Z * [new branch] gh/eellison/801/base -> origin/gh/eellison/801/base 2025-09-07T07:34:58.2203702Z * [new branch] gh/eellison/801/head -> origin/gh/eellison/801/head 2025-09-07T07:34:58.2203839Z * [new branch] gh/eellison/801/orig -> origin/gh/eellison/801/orig 2025-09-07T07:34:58.2203969Z * [new branch] gh/eellison/802/base -> origin/gh/eellison/802/base 2025-09-07T07:34:58.2204119Z * [new branch] gh/eellison/802/head -> origin/gh/eellison/802/head 2025-09-07T07:34:58.2204252Z * [new branch] gh/eellison/802/orig -> origin/gh/eellison/802/orig 2025-09-07T07:34:58.2204379Z * [new branch] gh/eellison/805/base -> origin/gh/eellison/805/base 2025-09-07T07:34:58.2204511Z * [new branch] gh/eellison/805/head -> origin/gh/eellison/805/head 2025-09-07T07:34:58.2204638Z * [new branch] gh/eellison/805/orig -> origin/gh/eellison/805/orig 2025-09-07T07:34:58.2204857Z * [new branch] gh/eellison/808/base -> origin/gh/eellison/808/base 2025-09-07T07:34:58.2204992Z * [new branch] gh/eellison/808/head -> origin/gh/eellison/808/head 2025-09-07T07:34:58.2205126Z * [new branch] gh/eellison/808/orig -> origin/gh/eellison/808/orig 2025-09-07T07:34:58.2205275Z * [new branch] gh/eellison/809/base -> origin/gh/eellison/809/base 2025-09-07T07:34:58.2205407Z * [new branch] gh/eellison/809/head -> origin/gh/eellison/809/head 2025-09-07T07:34:58.2206635Z * [new branch] gh/eellison/809/orig -> origin/gh/eellison/809/orig 2025-09-07T07:34:58.2207021Z * [new branch] gh/eellison/813/base -> 
origin/gh/eellison/813/base 2025-09-07T07:34:58.2215463Z * [new branch] gh/eellison/813/head -> origin/gh/eellison/813/head 2025-09-07T07:34:58.2218170Z * [new branch] gh/eellison/813/orig -> origin/gh/eellison/813/orig 2025-09-07T07:34:58.2218480Z * [new branch] gh/eellison/814/base -> origin/gh/eellison/814/base 2025-09-07T07:34:58.2218833Z * [new branch] gh/eellison/814/head -> origin/gh/eellison/814/head 2025-09-07T07:34:58.2218969Z * [new branch] gh/eellison/814/orig -> origin/gh/eellison/814/orig 2025-09-07T07:34:58.2219484Z * [new branch] gh/eellison/815/base -> origin/gh/eellison/815/base 2025-09-07T07:34:58.2223921Z * [new branch] gh/eellison/815/head -> origin/gh/eellison/815/head 2025-09-07T07:34:58.2228831Z * [new branch] gh/eellison/815/orig -> origin/gh/eellison/815/orig 2025-09-07T07:34:58.2233217Z * [new branch] gh/eellison/816/base -> origin/gh/eellison/816/base 2025-09-07T07:34:58.2235225Z * [new branch] gh/eellison/816/head -> origin/gh/eellison/816/head 2025-09-07T07:34:58.2235401Z * [new branch] gh/eellison/816/orig -> origin/gh/eellison/816/orig 2025-09-07T07:34:58.2235548Z * [new branch] gh/eellison/817/base -> origin/gh/eellison/817/base 2025-09-07T07:34:58.2235679Z * [new branch] gh/eellison/817/head -> origin/gh/eellison/817/head 2025-09-07T07:34:58.2235821Z * [new branch] gh/eellison/817/orig -> origin/gh/eellison/817/orig 2025-09-07T07:34:58.2235973Z * [new branch] gh/eellison/818/base -> origin/gh/eellison/818/base 2025-09-07T07:34:58.2236104Z * [new branch] gh/eellison/818/head -> origin/gh/eellison/818/head 2025-09-07T07:34:58.2236241Z * [new branch] gh/eellison/818/orig -> origin/gh/eellison/818/orig 2025-09-07T07:34:58.2236383Z * [new branch] gh/eellison/819/base -> origin/gh/eellison/819/base 2025-09-07T07:34:58.2236521Z * [new branch] gh/eellison/819/head -> origin/gh/eellison/819/head 2025-09-07T07:34:58.2236657Z * [new branch] gh/eellison/819/orig -> origin/gh/eellison/819/orig 2025-09-07T07:34:58.2236795Z * [new branch] gh/eellison/820/base -> origin/gh/eellison/820/base 2025-09-07T07:34:58.2236928Z * [new branch] gh/eellison/820/head -> origin/gh/eellison/820/head 2025-09-07T07:34:58.2237059Z * [new branch] gh/eellison/820/orig -> origin/gh/eellison/820/orig 2025-09-07T07:34:58.2237203Z * [new branch] gh/eellison/821/base -> origin/gh/eellison/821/base 2025-09-07T07:34:58.2237335Z * [new branch] gh/eellison/821/head -> origin/gh/eellison/821/head 2025-09-07T07:34:58.2237469Z * [new branch] gh/eellison/821/orig -> origin/gh/eellison/821/orig 2025-09-07T07:34:58.2237602Z * [new branch] gh/eellison/822/base -> origin/gh/eellison/822/base 2025-09-07T07:34:58.2237735Z * [new branch] gh/eellison/822/head -> origin/gh/eellison/822/head 2025-09-07T07:34:58.2238041Z * [new branch] gh/eellison/822/orig -> origin/gh/eellison/822/orig 2025-09-07T07:34:58.2238186Z * [new branch] gh/eellison/823/base -> origin/gh/eellison/823/base 2025-09-07T07:34:58.2238324Z * [new branch] gh/eellison/823/head -> origin/gh/eellison/823/head 2025-09-07T07:34:58.2238459Z * [new branch] gh/eellison/823/orig -> origin/gh/eellison/823/orig 2025-09-07T07:34:58.2238599Z * [new branch] gh/etaf/132/base -> origin/gh/etaf/132/base 2025-09-07T07:34:58.2238726Z * [new branch] gh/etaf/132/head -> origin/gh/etaf/132/head 2025-09-07T07:34:58.2238849Z * [new branch] gh/etaf/132/orig -> origin/gh/etaf/132/orig 2025-09-07T07:34:58.2238977Z * [new branch] gh/etaf/138/base -> origin/gh/etaf/138/base 2025-09-07T07:34:58.2239098Z * [new branch] gh/etaf/138/head -> origin/gh/etaf/138/head 
2025-09-07T07:34:58.2239228Z * [new branch] gh/etaf/138/orig -> origin/gh/etaf/138/orig
[... several hundred similar "* [new branch] <ref> -> origin/<ref>" fetch entries, creating remote-tracking branches under gh/etaf, gh/exclamaforte, gh/ezyang, gh/fadara01, gh/fduwjj, gh/fegin, gh/fffrog, gh/gmagogsfm, gh/guangyey, gh/guilhermeleobas, gh/henrylhtsang, gh/huydhn, gh/int3, gh/isuruf, and gh/jamesjwu ...]
2025-09-07T07:34:58.2797688Z * 
[new branch] gh/jamesjwu/59/base -> origin/gh/jamesjwu/59/base 2025-09-07T07:34:58.2797851Z * [new branch] gh/jamesjwu/59/head -> origin/gh/jamesjwu/59/head 2025-09-07T07:34:58.2797986Z * [new branch] gh/jamesjwu/60/base -> origin/gh/jamesjwu/60/base 2025-09-07T07:34:58.2804163Z * [new branch] gh/jamesjwu/60/head -> origin/gh/jamesjwu/60/head 2025-09-07T07:34:58.2804536Z * [new branch] gh/jamesjwu/61/base -> origin/gh/jamesjwu/61/base 2025-09-07T07:34:58.2804694Z * [new branch] gh/jamesjwu/61/head -> origin/gh/jamesjwu/61/head 2025-09-07T07:34:58.2804849Z * [new branch] gh/jamesjwu/62/base -> origin/gh/jamesjwu/62/base 2025-09-07T07:34:58.2804993Z * [new branch] gh/jamesjwu/62/head -> origin/gh/jamesjwu/62/head 2025-09-07T07:34:58.2805143Z * [new branch] gh/jamesjwu/63/base -> origin/gh/jamesjwu/63/base 2025-09-07T07:34:58.2805283Z * [new branch] gh/jamesjwu/63/head -> origin/gh/jamesjwu/63/head 2025-09-07T07:34:58.2805440Z * [new branch] gh/jamesjwu/64/base -> origin/gh/jamesjwu/64/base 2025-09-07T07:34:58.2805581Z * [new branch] gh/jamesjwu/64/head -> origin/gh/jamesjwu/64/head 2025-09-07T07:34:58.2805721Z * [new branch] gh/jamesjwu/65/base -> origin/gh/jamesjwu/65/base 2025-09-07T07:34:58.2805885Z * [new branch] gh/jamesjwu/65/head -> origin/gh/jamesjwu/65/head 2025-09-07T07:34:58.2806076Z * [new branch] gh/janeyx99/165/base -> origin/gh/janeyx99/165/base 2025-09-07T07:34:58.2807213Z * [new branch] gh/janeyx99/165/head -> origin/gh/janeyx99/165/head 2025-09-07T07:34:58.2813659Z * [new branch] gh/janeyx99/165/orig -> origin/gh/janeyx99/165/orig 2025-09-07T07:34:58.2815949Z * [new branch] gh/janeyx99/201/base -> origin/gh/janeyx99/201/base 2025-09-07T07:34:58.2816229Z * [new branch] gh/janeyx99/201/head -> origin/gh/janeyx99/201/head 2025-09-07T07:34:58.2821329Z * [new branch] gh/janeyx99/201/orig -> origin/gh/janeyx99/201/orig 2025-09-07T07:34:58.2821517Z * [new branch] gh/janeyx99/225/base -> origin/gh/janeyx99/225/base 2025-09-07T07:34:58.2821658Z * [new branch] gh/janeyx99/225/head -> origin/gh/janeyx99/225/head 2025-09-07T07:34:58.2821803Z * [new branch] gh/janeyx99/225/orig -> origin/gh/janeyx99/225/orig 2025-09-07T07:34:58.2821936Z * [new branch] gh/janeyx99/296/base -> origin/gh/janeyx99/296/base 2025-09-07T07:34:58.2822074Z * [new branch] gh/janeyx99/296/head -> origin/gh/janeyx99/296/head 2025-09-07T07:34:58.2822202Z * [new branch] gh/janeyx99/296/orig -> origin/gh/janeyx99/296/orig 2025-09-07T07:34:58.2822382Z * [new branch] gh/janeyx99/297/base -> origin/gh/janeyx99/297/base 2025-09-07T07:34:58.2822669Z * [new branch] gh/janeyx99/297/head -> origin/gh/janeyx99/297/head 2025-09-07T07:34:58.2822806Z * [new branch] gh/janeyx99/297/orig -> origin/gh/janeyx99/297/orig 2025-09-07T07:34:58.2822932Z * [new branch] gh/janeyx99/298/base -> origin/gh/janeyx99/298/base 2025-09-07T07:34:58.2823056Z * [new branch] gh/janeyx99/298/head -> origin/gh/janeyx99/298/head 2025-09-07T07:34:58.2823197Z * [new branch] gh/janeyx99/298/orig -> origin/gh/janeyx99/298/orig 2025-09-07T07:34:58.2823322Z * [new branch] gh/janeyx99/299/base -> origin/gh/janeyx99/299/base 2025-09-07T07:34:58.2823455Z * [new branch] gh/janeyx99/299/head -> origin/gh/janeyx99/299/head 2025-09-07T07:34:58.2823579Z * [new branch] gh/janeyx99/299/orig -> origin/gh/janeyx99/299/orig 2025-09-07T07:34:58.2823703Z * [new branch] gh/janeyx99/300/base -> origin/gh/janeyx99/300/base 2025-09-07T07:34:58.2823839Z * [new branch] gh/janeyx99/300/head -> origin/gh/janeyx99/300/head 2025-09-07T07:34:58.2825895Z * [new branch] 
gh/janeyx99/300/orig -> origin/gh/janeyx99/300/orig 2025-09-07T07:34:58.2826062Z * [new branch] gh/janeyx99/301/base -> origin/gh/janeyx99/301/base 2025-09-07T07:34:58.2826244Z * [new branch] gh/janeyx99/301/head -> origin/gh/janeyx99/301/head 2025-09-07T07:34:58.2826390Z * [new branch] gh/janeyx99/301/orig -> origin/gh/janeyx99/301/orig 2025-09-07T07:34:58.2826529Z * [new branch] gh/janeyx99/302/base -> origin/gh/janeyx99/302/base 2025-09-07T07:34:58.2826987Z * [new branch] gh/janeyx99/302/head -> origin/gh/janeyx99/302/head 2025-09-07T07:34:58.2827152Z * [new branch] gh/janeyx99/303/base -> origin/gh/janeyx99/303/base 2025-09-07T07:34:58.2833374Z * [new branch] gh/janeyx99/303/head -> origin/gh/janeyx99/303/head 2025-09-07T07:34:58.2833591Z * [new branch] gh/janeyx99/88/base -> origin/gh/janeyx99/88/base 2025-09-07T07:34:58.2833747Z * [new branch] gh/janeyx99/88/head -> origin/gh/janeyx99/88/head 2025-09-07T07:34:58.2833891Z * [new branch] gh/janeyx99/88/orig -> origin/gh/janeyx99/88/orig 2025-09-07T07:34:58.2834049Z * [new branch] gh/jansel/360/base -> origin/gh/jansel/360/base 2025-09-07T07:34:58.2834196Z * [new branch] gh/jansel/360/head -> origin/gh/jansel/360/head 2025-09-07T07:34:58.2834324Z * [new branch] gh/jansel/451/base -> origin/gh/jansel/451/base 2025-09-07T07:34:58.2834463Z * [new branch] gh/jansel/451/head -> origin/gh/jansel/451/head 2025-09-07T07:34:58.2834593Z * [new branch] gh/jansel/451/orig -> origin/gh/jansel/451/orig 2025-09-07T07:34:58.2834731Z * [new branch] gh/jansel/462/base -> origin/gh/jansel/462/base 2025-09-07T07:34:58.2834860Z * [new branch] gh/jansel/462/head -> origin/gh/jansel/462/head 2025-09-07T07:34:58.2835192Z * [new branch] gh/jansel/462/orig -> origin/gh/jansel/462/orig 2025-09-07T07:34:58.2836065Z * [new branch] gh/jansel/531/base -> origin/gh/jansel/531/base 2025-09-07T07:34:58.2836252Z * [new branch] gh/jansel/531/head -> origin/gh/jansel/531/head 2025-09-07T07:34:58.2836745Z * [new branch] gh/jansel/531/orig -> origin/gh/jansel/531/orig 2025-09-07T07:34:58.2840023Z * [new branch] gh/jbschlosser/208/head -> origin/gh/jbschlosser/208/head 2025-09-07T07:34:58.2840356Z * [new branch] gh/jbschlosser/247/base -> origin/gh/jbschlosser/247/base 2025-09-07T07:34:58.2840601Z * [new branch] gh/jbschlosser/247/head -> origin/gh/jbschlosser/247/head 2025-09-07T07:34:58.2841121Z * [new branch] gh/jbschlosser/247/orig -> origin/gh/jbschlosser/247/orig 2025-09-07T07:34:58.2841825Z * [new branch] gh/jbschlosser/248/base -> origin/gh/jbschlosser/248/base 2025-09-07T07:34:58.2842165Z * [new branch] gh/jbschlosser/248/head -> origin/gh/jbschlosser/248/head 2025-09-07T07:34:58.2842375Z * [new branch] gh/jbschlosser/248/orig -> origin/gh/jbschlosser/248/orig 2025-09-07T07:34:58.2844375Z * [new branch] gh/jbschlosser/250/base -> origin/gh/jbschlosser/250/base 2025-09-07T07:34:58.2844593Z * [new branch] gh/jbschlosser/250/head -> origin/gh/jbschlosser/250/head 2025-09-07T07:34:58.2844777Z * [new branch] gh/jbschlosser/250/orig -> origin/gh/jbschlosser/250/orig 2025-09-07T07:34:58.2846744Z * [new branch] gh/jiayisunx/59/base -> origin/gh/jiayisunx/59/base 2025-09-07T07:34:58.2847079Z * [new branch] gh/jiayisunx/59/head -> origin/gh/jiayisunx/59/head 2025-09-07T07:34:58.2848493Z * [new branch] gh/jiayisunx/59/orig -> origin/gh/jiayisunx/59/orig 2025-09-07T07:34:58.2848638Z * [new branch] gh/jiayisunx/61/base -> origin/gh/jiayisunx/61/base 2025-09-07T07:34:58.2850470Z * [new branch] gh/jiayisunx/61/head -> origin/gh/jiayisunx/61/head 2025-09-07T07:34:58.2851026Z * [new 
branch] gh/jiayisunx/61/orig -> origin/gh/jiayisunx/61/orig 2025-09-07T07:34:58.2851348Z * [new branch] gh/jiayisunx/64/base -> origin/gh/jiayisunx/64/base 2025-09-07T07:34:58.2851621Z * [new branch] gh/jiayisunx/64/head -> origin/gh/jiayisunx/64/head 2025-09-07T07:34:58.2853213Z * [new branch] gh/jiayisunx/64/orig -> origin/gh/jiayisunx/64/orig 2025-09-07T07:34:58.2853394Z * [new branch] gh/jiayisunx/65/base -> origin/gh/jiayisunx/65/base 2025-09-07T07:34:58.2854053Z * [new branch] gh/jiayisunx/65/head -> origin/gh/jiayisunx/65/head 2025-09-07T07:34:58.2854643Z * [new branch] gh/jiayisunx/65/orig -> origin/gh/jiayisunx/65/orig 2025-09-07T07:34:58.2858830Z * [new branch] gh/jiayisunx/66/base -> origin/gh/jiayisunx/66/base 2025-09-07T07:34:58.2859017Z * [new branch] gh/jiayisunx/66/head -> origin/gh/jiayisunx/66/head 2025-09-07T07:34:58.2859210Z * [new branch] gh/jiayisunx/66/orig -> origin/gh/jiayisunx/66/orig 2025-09-07T07:34:58.2859378Z * [new branch] gh/jiayisunx/67/base -> origin/gh/jiayisunx/67/base 2025-09-07T07:34:58.2859529Z * [new branch] gh/jiayisunx/67/head -> origin/gh/jiayisunx/67/head 2025-09-07T07:34:58.2864867Z * [new branch] gh/jiayisunx/67/orig -> origin/gh/jiayisunx/67/orig 2025-09-07T07:34:58.2865072Z * [new branch] gh/jiayisunx/68/base -> origin/gh/jiayisunx/68/base 2025-09-07T07:34:58.2865240Z * [new branch] gh/jiayisunx/68/head -> origin/gh/jiayisunx/68/head 2025-09-07T07:34:58.2865391Z * [new branch] gh/jiayisunx/68/orig -> origin/gh/jiayisunx/68/orig 2025-09-07T07:34:58.2865532Z * [new branch] gh/jiayisunx/69/base -> origin/gh/jiayisunx/69/base 2025-09-07T07:34:58.2865693Z * [new branch] gh/jiayisunx/69/head -> origin/gh/jiayisunx/69/head 2025-09-07T07:34:58.2865848Z * [new branch] gh/jiayisunx/69/orig -> origin/gh/jiayisunx/69/orig 2025-09-07T07:34:58.2865995Z * [new branch] gh/jiayisunx/70/base -> origin/gh/jiayisunx/70/base 2025-09-07T07:34:58.2866141Z * [new branch] gh/jiayisunx/70/head -> origin/gh/jiayisunx/70/head 2025-09-07T07:34:58.2866286Z * [new branch] gh/jiayisunx/70/orig -> origin/gh/jiayisunx/70/orig 2025-09-07T07:34:58.2866952Z * [new branch] gh/jiayisunx/71/base -> origin/gh/jiayisunx/71/base 2025-09-07T07:34:58.2867520Z * [new branch] gh/jiayisunx/71/head -> origin/gh/jiayisunx/71/head 2025-09-07T07:34:58.2868824Z * [new branch] gh/jiayisunx/71/orig -> origin/gh/jiayisunx/71/orig 2025-09-07T07:34:58.2869000Z * [new branch] gh/jiayisunx/72/base -> origin/gh/jiayisunx/72/base 2025-09-07T07:34:58.2871151Z * [new branch] gh/jiayisunx/72/head -> origin/gh/jiayisunx/72/head 2025-09-07T07:34:58.2871341Z * [new branch] gh/jiayisunx/72/orig -> origin/gh/jiayisunx/72/orig 2025-09-07T07:34:58.2872402Z * [new branch] gh/jiayisunx/73/base -> origin/gh/jiayisunx/73/base 2025-09-07T07:34:58.2872975Z * [new branch] gh/jiayisunx/73/head -> origin/gh/jiayisunx/73/head 2025-09-07T07:34:58.2873226Z * [new branch] gh/jiayisunx/73/orig -> origin/gh/jiayisunx/73/orig 2025-09-07T07:34:58.2873592Z * [new branch] gh/jiayisunx/74/base -> origin/gh/jiayisunx/74/base 2025-09-07T07:34:58.2875426Z * [new branch] gh/jiayisunx/74/head -> origin/gh/jiayisunx/74/head 2025-09-07T07:34:58.2875616Z * [new branch] gh/jiayisunx/74/orig -> origin/gh/jiayisunx/74/orig 2025-09-07T07:34:58.2876293Z * [new branch] gh/jiayisunx/75/base -> origin/gh/jiayisunx/75/base 2025-09-07T07:34:58.2877326Z * [new branch] gh/jiayisunx/75/head -> origin/gh/jiayisunx/75/head 2025-09-07T07:34:58.2877490Z * [new branch] gh/jiayisunx/75/orig -> origin/gh/jiayisunx/75/orig 2025-09-07T07:34:58.2882271Z * [new 
branch] gh/jiayisunx/76/base -> origin/gh/jiayisunx/76/base 2025-09-07T07:34:58.2882463Z * [new branch] gh/jiayisunx/76/head -> origin/gh/jiayisunx/76/head 2025-09-07T07:34:58.2882623Z * [new branch] gh/jiayisunx/76/orig -> origin/gh/jiayisunx/76/orig 2025-09-07T07:34:58.2882793Z * [new branch] gh/jjwu@meta.com/1/base -> origin/gh/jjwu@meta.com/1/base 2025-09-07T07:34:58.2882985Z * [new branch] gh/jjwu@meta.com/1/head -> origin/gh/jjwu@meta.com/1/head 2025-09-07T07:34:58.2883149Z * [new branch] gh/justinchuby/111/base -> origin/gh/justinchuby/111/base 2025-09-07T07:34:58.2883389Z * [new branch] gh/justinchuby/111/head -> origin/gh/justinchuby/111/head 2025-09-07T07:34:58.2884472Z * [new branch] gh/justinchuby/111/orig -> origin/gh/justinchuby/111/orig 2025-09-07T07:34:58.2885144Z * [new branch] gh/justinchuby/112/base -> origin/gh/justinchuby/112/base 2025-09-07T07:34:58.2885735Z * [new branch] gh/justinchuby/112/head -> origin/gh/justinchuby/112/head 2025-09-07T07:34:58.2886624Z * [new branch] gh/justinchuby/112/orig -> origin/gh/justinchuby/112/orig 2025-09-07T07:34:58.2891940Z * [new branch] gh/justinchuby/113/base -> origin/gh/justinchuby/113/base 2025-09-07T07:34:58.2894415Z * [new branch] gh/justinchuby/113/head -> origin/gh/justinchuby/113/head 2025-09-07T07:34:58.2894619Z * [new branch] gh/justinchuby/113/orig -> origin/gh/justinchuby/113/orig 2025-09-07T07:34:58.2895009Z * [new branch] gh/justinchuby/114/base -> origin/gh/justinchuby/114/base 2025-09-07T07:34:58.2895184Z * [new branch] gh/justinchuby/114/head -> origin/gh/justinchuby/114/head 2025-09-07T07:34:58.2895337Z * [new branch] gh/justinchuby/114/orig -> origin/gh/justinchuby/114/orig 2025-09-07T07:34:58.2895481Z * [new branch] gh/justinchuby/115/base -> origin/gh/justinchuby/115/base 2025-09-07T07:34:58.2895633Z * [new branch] gh/justinchuby/115/head -> origin/gh/justinchuby/115/head 2025-09-07T07:34:58.2895793Z * [new branch] gh/justinchuby/115/orig -> origin/gh/justinchuby/115/orig 2025-09-07T07:34:58.2895967Z * [new branch] gh/karthickai/1/base -> origin/gh/karthickai/1/base 2025-09-07T07:34:58.2896457Z * [new branch] gh/karthickai/1/head -> origin/gh/karthickai/1/head 2025-09-07T07:34:58.2896633Z * [new branch] gh/karthickai/1/orig -> origin/gh/karthickai/1/orig 2025-09-07T07:34:58.2899866Z * [new branch] gh/karthickai/2/base -> origin/gh/karthickai/2/base 2025-09-07T07:34:58.2900099Z * [new branch] gh/karthickai/2/head -> origin/gh/karthickai/2/head 2025-09-07T07:34:58.2900264Z * [new branch] gh/karthickai/2/orig -> origin/gh/karthickai/2/orig 2025-09-07T07:34:58.2900440Z * [new branch] gh/kurtamohler/32/base -> origin/gh/kurtamohler/32/base 2025-09-07T07:34:58.2904962Z * [new branch] gh/kurtamohler/32/head -> origin/gh/kurtamohler/32/head 2025-09-07T07:34:58.2905184Z * [new branch] gh/kurtamohler/32/orig -> origin/gh/kurtamohler/32/orig 2025-09-07T07:34:58.2905395Z * [new branch] gh/kurtamohler/33/base -> origin/gh/kurtamohler/33/base 2025-09-07T07:34:58.2905562Z * [new branch] gh/kurtamohler/33/head -> origin/gh/kurtamohler/33/head 2025-09-07T07:34:58.2905750Z * [new branch] gh/kurtamohler/33/orig -> origin/gh/kurtamohler/33/orig 2025-09-07T07:34:58.2912050Z * [new branch] gh/kurtamohler/34/base -> origin/gh/kurtamohler/34/base 2025-09-07T07:34:58.2914664Z * [new branch] gh/kurtamohler/34/head -> origin/gh/kurtamohler/34/head 2025-09-07T07:34:58.2914845Z * [new branch] gh/kurtamohler/34/orig -> origin/gh/kurtamohler/34/orig 2025-09-07T07:34:58.2915001Z * [new branch] gh/kurtamohler/41/base -> 
origin/gh/kurtamohler/41/base 2025-09-07T07:34:58.2915152Z * [new branch] gh/kurtamohler/41/head -> origin/gh/kurtamohler/41/head 2025-09-07T07:34:58.2915302Z * [new branch] gh/kurtamohler/41/orig -> origin/gh/kurtamohler/41/orig 2025-09-07T07:34:58.2915474Z * [new branch] gh/kurtamohler/46/base -> origin/gh/kurtamohler/46/base 2025-09-07T07:34:58.2915631Z * [new branch] gh/kurtamohler/46/head -> origin/gh/kurtamohler/46/head 2025-09-07T07:34:58.2915777Z * [new branch] gh/kurtamohler/46/orig -> origin/gh/kurtamohler/46/orig 2025-09-07T07:34:58.2915918Z * [new branch] gh/kurtamohler/47/base -> origin/gh/kurtamohler/47/base 2025-09-07T07:34:58.2916065Z * [new branch] gh/kurtamohler/47/head -> origin/gh/kurtamohler/47/head 2025-09-07T07:34:58.2916205Z * [new branch] gh/kurtamohler/47/orig -> origin/gh/kurtamohler/47/orig 2025-09-07T07:34:58.2916346Z * [new branch] gh/kurtamohler/48/base -> origin/gh/kurtamohler/48/base 2025-09-07T07:34:58.2916494Z * [new branch] gh/kurtamohler/48/head -> origin/gh/kurtamohler/48/head 2025-09-07T07:34:58.2918062Z * [new branch] gh/kurtamohler/48/orig -> origin/gh/kurtamohler/48/orig 2025-09-07T07:34:58.2918227Z * [new branch] gh/kurtamohler/49/base -> origin/gh/kurtamohler/49/base 2025-09-07T07:34:58.2918368Z * [new branch] gh/kurtamohler/49/head -> origin/gh/kurtamohler/49/head 2025-09-07T07:34:58.2918517Z * [new branch] gh/kurtamohler/49/orig -> origin/gh/kurtamohler/49/orig 2025-09-07T07:34:58.2918660Z * [new branch] gh/kurtamohler/50/base -> origin/gh/kurtamohler/50/base 2025-09-07T07:34:58.2918802Z * [new branch] gh/kurtamohler/50/head -> origin/gh/kurtamohler/50/head 2025-09-07T07:34:58.2918950Z * [new branch] gh/kurtamohler/50/orig -> origin/gh/kurtamohler/50/orig 2025-09-07T07:34:58.2919101Z * [new branch] gh/kwen2501/130/base -> origin/gh/kwen2501/130/base 2025-09-07T07:34:58.2923499Z * [new branch] gh/kwen2501/130/head -> origin/gh/kwen2501/130/head 2025-09-07T07:34:58.2923705Z * [new branch] gh/kwen2501/130/orig -> origin/gh/kwen2501/130/orig 2025-09-07T07:34:58.2923857Z * [new branch] gh/kwen2501/15/base -> origin/gh/kwen2501/15/base 2025-09-07T07:34:58.2923999Z * [new branch] gh/kwen2501/15/head -> origin/gh/kwen2501/15/head 2025-09-07T07:34:58.2924147Z * [new branch] gh/kwen2501/156/base -> origin/gh/kwen2501/156/base 2025-09-07T07:34:58.2924289Z * [new branch] gh/kwen2501/156/head -> origin/gh/kwen2501/156/head 2025-09-07T07:34:58.2924425Z * [new branch] gh/kwen2501/156/orig -> origin/gh/kwen2501/156/orig 2025-09-07T07:34:58.2924577Z * [new branch] gh/kwen2501/170/base -> origin/gh/kwen2501/170/base 2025-09-07T07:34:58.2924729Z * [new branch] gh/kwen2501/170/head -> origin/gh/kwen2501/170/head 2025-09-07T07:34:58.2926145Z * [new branch] gh/kwen2501/186/base -> origin/gh/kwen2501/186/base 2025-09-07T07:34:58.2926457Z * [new branch] gh/kwen2501/186/head -> origin/gh/kwen2501/186/head 2025-09-07T07:34:58.2927747Z * [new branch] gh/kwen2501/186/orig -> origin/gh/kwen2501/186/orig 2025-09-07T07:34:58.2928498Z * [new branch] gh/kwen2501/187/base -> origin/gh/kwen2501/187/base 2025-09-07T07:34:58.2929724Z * [new branch] gh/kwen2501/187/head -> origin/gh/kwen2501/187/head 2025-09-07T07:34:58.2929868Z * [new branch] gh/kwen2501/187/orig -> origin/gh/kwen2501/187/orig 2025-09-07T07:34:58.2931534Z * [new branch] gh/kwen2501/188/base -> origin/gh/kwen2501/188/base 2025-09-07T07:34:58.2931740Z * [new branch] gh/kwen2501/188/head -> origin/gh/kwen2501/188/head 2025-09-07T07:34:58.2932156Z * [new branch] gh/kwen2501/188/orig -> origin/gh/kwen2501/188/orig 
2025-09-07T07:34:58.2933060Z * [new branch] gh/kwen2501/194/base -> origin/gh/kwen2501/194/base 2025-09-07T07:34:58.2933318Z * [new branch] gh/kwen2501/194/head -> origin/gh/kwen2501/194/head 2025-09-07T07:34:58.2934478Z * [new branch] gh/kwen2501/194/orig -> origin/gh/kwen2501/194/orig 2025-09-07T07:34:58.2934891Z * [new branch] gh/kwen2501/199/base -> origin/gh/kwen2501/199/base 2025-09-07T07:34:58.2935904Z * [new branch] gh/kwen2501/199/head -> origin/gh/kwen2501/199/head 2025-09-07T07:34:58.2936187Z * [new branch] gh/kwen2501/199/orig -> origin/gh/kwen2501/199/orig 2025-09-07T07:34:58.2938909Z * [new branch] gh/kwen2501/200/base -> origin/gh/kwen2501/200/base 2025-09-07T07:34:58.2939173Z * [new branch] gh/kwen2501/200/head -> origin/gh/kwen2501/200/head 2025-09-07T07:34:58.2939331Z * [new branch] gh/kwen2501/200/orig -> origin/gh/kwen2501/200/orig 2025-09-07T07:34:58.2939874Z * [new branch] gh/kwen2501/201/base -> origin/gh/kwen2501/201/base 2025-09-07T07:34:58.2940103Z * [new branch] gh/kwen2501/201/head -> origin/gh/kwen2501/201/head 2025-09-07T07:34:58.2940436Z * [new branch] gh/kwen2501/201/orig -> origin/gh/kwen2501/201/orig 2025-09-07T07:34:58.2941728Z * [new branch] gh/kwen2501/203/base -> origin/gh/kwen2501/203/base 2025-09-07T07:34:58.2942066Z * [new branch] gh/kwen2501/203/head -> origin/gh/kwen2501/203/head 2025-09-07T07:34:58.2943054Z * [new branch] gh/kwen2501/203/orig -> origin/gh/kwen2501/203/orig 2025-09-07T07:34:58.2944395Z * [new branch] gh/kwen2501/204/base -> origin/gh/kwen2501/204/base 2025-09-07T07:34:58.2944557Z * [new branch] gh/kwen2501/204/head -> origin/gh/kwen2501/204/head 2025-09-07T07:34:58.2944700Z * [new branch] gh/kwen2501/204/orig -> origin/gh/kwen2501/204/orig 2025-09-07T07:34:58.2945569Z * [new branch] gh/kwen2501/205/base -> origin/gh/kwen2501/205/base 2025-09-07T07:34:58.2946698Z * [new branch] gh/kwen2501/205/head -> origin/gh/kwen2501/205/head 2025-09-07T07:34:58.2947310Z * [new branch] gh/kwen2501/205/orig -> origin/gh/kwen2501/205/orig 2025-09-07T07:34:58.2948233Z * [new branch] gh/kwen2501/206/base -> origin/gh/kwen2501/206/base 2025-09-07T07:34:58.2953446Z * [new branch] gh/kwen2501/206/head -> origin/gh/kwen2501/206/head 2025-09-07T07:34:58.2953652Z * [new branch] gh/kwen2501/206/orig -> origin/gh/kwen2501/206/orig 2025-09-07T07:34:58.2953850Z * [new branch] gh/kwen2501/207/base -> origin/gh/kwen2501/207/base 2025-09-07T07:34:58.2953990Z * [new branch] gh/kwen2501/207/head -> origin/gh/kwen2501/207/head 2025-09-07T07:34:58.2954140Z * [new branch] gh/kwen2501/207/orig -> origin/gh/kwen2501/207/orig 2025-09-07T07:34:58.2954304Z * [new branch] gh/kwen2501/208/base -> origin/gh/kwen2501/208/base 2025-09-07T07:34:58.2954443Z * [new branch] gh/kwen2501/208/head -> origin/gh/kwen2501/208/head 2025-09-07T07:34:58.2954568Z * [new branch] gh/kwen2501/208/orig -> origin/gh/kwen2501/208/orig 2025-09-07T07:34:58.2955325Z * [new branch] gh/kwen2501/209/base -> origin/gh/kwen2501/209/base 2025-09-07T07:34:58.2959208Z * [new branch] gh/kwen2501/209/head -> origin/gh/kwen2501/209/head 2025-09-07T07:34:58.2959393Z * [new branch] gh/kwen2501/209/orig -> origin/gh/kwen2501/209/orig 2025-09-07T07:34:58.2959598Z * [new branch] gh/kwen2501/210/base -> origin/gh/kwen2501/210/base 2025-09-07T07:34:58.2959735Z * [new branch] gh/kwen2501/210/head -> origin/gh/kwen2501/210/head 2025-09-07T07:34:58.2959877Z * [new branch] gh/kwen2501/210/orig -> origin/gh/kwen2501/210/orig 2025-09-07T07:34:58.2960027Z * [new branch] gh/kwen2501/211/base -> origin/gh/kwen2501/211/base 
2025-09-07T07:34:58.2960183Z * [new branch] gh/kwen2501/211/head -> origin/gh/kwen2501/211/head 2025-09-07T07:34:58.2961505Z * [new branch] gh/kwen2501/212/base -> origin/gh/kwen2501/212/base 2025-09-07T07:34:58.2961888Z * [new branch] gh/kwen2501/212/head -> origin/gh/kwen2501/212/head 2025-09-07T07:34:58.2962516Z * [new branch] gh/kwen2501/212/orig -> origin/gh/kwen2501/212/orig 2025-09-07T07:34:58.2963985Z * [new branch] gh/kwen2501/213/base -> origin/gh/kwen2501/213/base 2025-09-07T07:34:58.2964228Z * [new branch] gh/kwen2501/213/head -> origin/gh/kwen2501/213/head 2025-09-07T07:34:58.2964857Z * [new branch] gh/kwen2501/213/orig -> origin/gh/kwen2501/213/orig 2025-09-07T07:34:58.2966174Z * [new branch] gh/kwen2501/214/base -> origin/gh/kwen2501/214/base 2025-09-07T07:34:58.2966473Z * [new branch] gh/kwen2501/214/head -> origin/gh/kwen2501/214/head 2025-09-07T07:34:58.2971068Z * [new branch] gh/kwen2501/214/orig -> origin/gh/kwen2501/214/orig 2025-09-07T07:34:58.2971259Z * [new branch] gh/kwen2501/215/base -> origin/gh/kwen2501/215/base 2025-09-07T07:34:58.2971416Z * [new branch] gh/kwen2501/215/head -> origin/gh/kwen2501/215/head 2025-09-07T07:34:58.2971691Z * [new branch] gh/kwen2501/215/orig -> origin/gh/kwen2501/215/orig 2025-09-07T07:34:58.2971847Z * [new branch] gh/kwen2501/216/base -> origin/gh/kwen2501/216/base 2025-09-07T07:34:58.2972017Z * [new branch] gh/kwen2501/216/head -> origin/gh/kwen2501/216/head 2025-09-07T07:34:58.2972157Z * [new branch] gh/kwen2501/216/orig -> origin/gh/kwen2501/216/orig 2025-09-07T07:34:58.2976589Z * [new branch] gh/kwen2501/217/base -> origin/gh/kwen2501/217/base 2025-09-07T07:34:58.2976719Z * [new branch] gh/kwen2501/217/head -> origin/gh/kwen2501/217/head 2025-09-07T07:34:58.2976846Z * [new branch] gh/kwen2501/217/orig -> origin/gh/kwen2501/217/orig 2025-09-07T07:34:58.2977052Z * [new branch] gh/kwen2501/218/base -> origin/gh/kwen2501/218/base 2025-09-07T07:34:58.2983596Z * [new branch] gh/kwen2501/218/head -> origin/gh/kwen2501/218/head 2025-09-07T07:34:58.2989169Z * [new branch] gh/kwen2501/218/orig -> origin/gh/kwen2501/218/orig 2025-09-07T07:34:58.2991554Z * [new branch] gh/kwen2501/219/base -> origin/gh/kwen2501/219/base 2025-09-07T07:34:58.2991723Z * [new branch] gh/kwen2501/219/head -> origin/gh/kwen2501/219/head 2025-09-07T07:34:58.2991926Z * [new branch] gh/kwen2501/219/orig -> origin/gh/kwen2501/219/orig 2025-09-07T07:34:58.2992087Z * [new branch] gh/kwen2501/220/base -> origin/gh/kwen2501/220/base 2025-09-07T07:34:58.2992233Z * [new branch] gh/kwen2501/220/head -> origin/gh/kwen2501/220/head 2025-09-07T07:34:58.2992369Z * [new branch] gh/kwen2501/220/orig -> origin/gh/kwen2501/220/orig 2025-09-07T07:34:58.2992671Z * [new branch] gh/kwen2501/221/base -> origin/gh/kwen2501/221/base 2025-09-07T07:34:58.2992807Z * [new branch] gh/kwen2501/221/head -> origin/gh/kwen2501/221/head 2025-09-07T07:34:58.2992937Z * [new branch] gh/kwen2501/221/orig -> origin/gh/kwen2501/221/orig 2025-09-07T07:34:58.2993082Z * [new branch] gh/kwen2501/222/base -> origin/gh/kwen2501/222/base 2025-09-07T07:34:58.2993212Z * [new branch] gh/kwen2501/222/head -> origin/gh/kwen2501/222/head 2025-09-07T07:34:58.2993348Z * [new branch] gh/kwen2501/222/orig -> origin/gh/kwen2501/222/orig 2025-09-07T07:34:58.2993485Z * [new branch] gh/kwen2501/223/base -> origin/gh/kwen2501/223/base 2025-09-07T07:34:58.2993622Z * [new branch] gh/kwen2501/223/head -> origin/gh/kwen2501/223/head 2025-09-07T07:34:58.2993749Z * [new branch] gh/kwen2501/223/orig -> origin/gh/kwen2501/223/orig 
2025-09-07T07:34:58.2993881Z * [new branch] gh/kwen2501/224/base -> origin/gh/kwen2501/224/base 2025-09-07T07:34:58.2994018Z * [new branch] gh/kwen2501/224/head -> origin/gh/kwen2501/224/head 2025-09-07T07:34:58.2994143Z * [new branch] gh/kwen2501/224/orig -> origin/gh/kwen2501/224/orig 2025-09-07T07:34:58.2994278Z * [new branch] gh/kwen2501/225/base -> origin/gh/kwen2501/225/base 2025-09-07T07:34:58.2994405Z * [new branch] gh/kwen2501/225/head -> origin/gh/kwen2501/225/head 2025-09-07T07:34:58.2994542Z * [new branch] gh/kwen2501/225/orig -> origin/gh/kwen2501/225/orig 2025-09-07T07:34:58.2994684Z * [new branch] gh/kwen2501/226/base -> origin/gh/kwen2501/226/base 2025-09-07T07:34:58.2995268Z * [new branch] gh/kwen2501/226/head -> origin/gh/kwen2501/226/head 2025-09-07T07:34:58.2995437Z * [new branch] gh/kwen2501/226/orig -> origin/gh/kwen2501/226/orig 2025-09-07T07:34:58.2995627Z * [new branch] gh/kwen2501/227/base -> origin/gh/kwen2501/227/base 2025-09-07T07:34:58.2995781Z * [new branch] gh/kwen2501/227/head -> origin/gh/kwen2501/227/head 2025-09-07T07:34:58.2998831Z * [new branch] gh/kwen2501/227/orig -> origin/gh/kwen2501/227/orig 2025-09-07T07:34:58.2999112Z * [new branch] gh/kwen2501/228/base -> origin/gh/kwen2501/228/base 2025-09-07T07:34:58.2999263Z * [new branch] gh/kwen2501/228/head -> origin/gh/kwen2501/228/head 2025-09-07T07:34:58.2999714Z * [new branch] gh/kwen2501/228/orig -> origin/gh/kwen2501/228/orig 2025-09-07T07:34:58.2999865Z * [new branch] gh/kwen2501/229/base -> origin/gh/kwen2501/229/base 2025-09-07T07:34:58.3000092Z * [new branch] gh/kwen2501/229/head -> origin/gh/kwen2501/229/head 2025-09-07T07:34:58.3000857Z * [new branch] gh/kwen2501/229/orig -> origin/gh/kwen2501/229/orig 2025-09-07T07:34:58.3001102Z * [new branch] gh/kwen2501/230/base -> origin/gh/kwen2501/230/base 2025-09-07T07:34:58.3001782Z * [new branch] gh/kwen2501/230/head -> origin/gh/kwen2501/230/head 2025-09-07T07:34:58.3002416Z * [new branch] gh/kwen2501/230/orig -> origin/gh/kwen2501/230/orig 2025-09-07T07:34:58.3003808Z * [new branch] gh/kwen2501/231/base -> origin/gh/kwen2501/231/base 2025-09-07T07:34:58.3004078Z * [new branch] gh/kwen2501/231/head -> origin/gh/kwen2501/231/head 2025-09-07T07:34:58.3005144Z * [new branch] gh/kwen2501/231/orig -> origin/gh/kwen2501/231/orig 2025-09-07T07:34:58.3005774Z * [new branch] gh/kwen2501/232/base -> origin/gh/kwen2501/232/base 2025-09-07T07:34:58.3007173Z * [new branch] gh/kwen2501/232/head -> origin/gh/kwen2501/232/head 2025-09-07T07:34:58.3007327Z * [new branch] gh/kwen2501/232/orig -> origin/gh/kwen2501/232/orig 2025-09-07T07:34:58.3011298Z * [new branch] gh/laithsakka/156/base -> origin/gh/laithsakka/156/base 2025-09-07T07:34:58.3014809Z * [new branch] gh/laithsakka/156/head -> origin/gh/laithsakka/156/head 2025-09-07T07:34:58.3015118Z * [new branch] gh/laithsakka/156/orig -> origin/gh/laithsakka/156/orig 2025-09-07T07:34:58.3015363Z * [new branch] gh/laithsakka/160/base -> origin/gh/laithsakka/160/base 2025-09-07T07:34:58.3015522Z * [new branch] gh/laithsakka/160/head -> origin/gh/laithsakka/160/head 2025-09-07T07:34:58.3015778Z * [new branch] gh/laithsakka/160/orig -> origin/gh/laithsakka/160/orig 2025-09-07T07:34:58.3019290Z * [new branch] gh/laithsakka/178/base -> origin/gh/laithsakka/178/base 2025-09-07T07:34:58.3019548Z * [new branch] gh/laithsakka/178/head -> origin/gh/laithsakka/178/head 2025-09-07T07:34:58.3019743Z * [new branch] gh/laithsakka/178/orig -> origin/gh/laithsakka/178/orig 2025-09-07T07:34:58.3019907Z * [new branch] 
gh/laithsakka/191/base -> origin/gh/laithsakka/191/base 2025-09-07T07:34:58.3020154Z * [new branch] gh/laithsakka/191/head -> origin/gh/laithsakka/191/head 2025-09-07T07:34:58.3027362Z * [new branch] gh/laithsakka/191/orig -> origin/gh/laithsakka/191/orig 2025-09-07T07:34:58.3029348Z * [new branch] gh/laithsakka/237/base -> origin/gh/laithsakka/237/base 2025-09-07T07:34:58.3029681Z * [new branch] gh/laithsakka/237/head -> origin/gh/laithsakka/237/head 2025-09-07T07:34:58.3032597Z * [new branch] gh/laithsakka/237/orig -> origin/gh/laithsakka/237/orig 2025-09-07T07:34:58.3032919Z * [new branch] gh/laithsakka/249/base -> origin/gh/laithsakka/249/base 2025-09-07T07:34:58.3033087Z * [new branch] gh/laithsakka/249/head -> origin/gh/laithsakka/249/head 2025-09-07T07:34:58.3033298Z * [new branch] gh/laithsakka/249/orig -> origin/gh/laithsakka/249/orig 2025-09-07T07:34:58.3033528Z * [new branch] gh/laithsakka/251/base -> origin/gh/laithsakka/251/base 2025-09-07T07:34:58.3033695Z * [new branch] gh/laithsakka/251/head -> origin/gh/laithsakka/251/head 2025-09-07T07:34:58.3033937Z * [new branch] gh/laithsakka/251/orig -> origin/gh/laithsakka/251/orig 2025-09-07T07:34:58.3034106Z * [new branch] gh/laithsakka/254/base -> origin/gh/laithsakka/254/base 2025-09-07T07:34:58.3034396Z * [new branch] gh/laithsakka/254/head -> origin/gh/laithsakka/254/head 2025-09-07T07:34:58.3034890Z * [new branch] gh/laithsakka/254/orig -> origin/gh/laithsakka/254/orig 2025-09-07T07:34:58.3035075Z * [new branch] gh/laithsakka/255/base -> origin/gh/laithsakka/255/base 2025-09-07T07:34:58.3035264Z * [new branch] gh/laithsakka/255/head -> origin/gh/laithsakka/255/head 2025-09-07T07:34:58.3035415Z * [new branch] gh/laithsakka/255/orig -> origin/gh/laithsakka/255/orig 2025-09-07T07:34:58.3035571Z * [new branch] gh/laithsakka/256/base -> origin/gh/laithsakka/256/base 2025-09-07T07:34:58.3035723Z * [new branch] gh/laithsakka/256/head -> origin/gh/laithsakka/256/head 2025-09-07T07:34:58.3035882Z * [new branch] gh/laithsakka/256/orig -> origin/gh/laithsakka/256/orig 2025-09-07T07:34:58.3036049Z * [new branch] gh/laithsakka/257/base -> origin/gh/laithsakka/257/base 2025-09-07T07:34:58.3036193Z * [new branch] gh/laithsakka/257/head -> origin/gh/laithsakka/257/head 2025-09-07T07:34:58.3036344Z * [new branch] gh/laithsakka/257/orig -> origin/gh/laithsakka/257/orig 2025-09-07T07:34:58.3036487Z * [new branch] gh/laithsakka/258/base -> origin/gh/laithsakka/258/base 2025-09-07T07:34:58.3036974Z * [new branch] gh/laithsakka/258/head -> origin/gh/laithsakka/258/head 2025-09-07T07:34:58.3037138Z * [new branch] gh/laithsakka/258/orig -> origin/gh/laithsakka/258/orig 2025-09-07T07:34:58.3037295Z * [new branch] gh/laithsakka/259/base -> origin/gh/laithsakka/259/base 2025-09-07T07:34:58.3037437Z * [new branch] gh/laithsakka/259/head -> origin/gh/laithsakka/259/head 2025-09-07T07:34:58.3037578Z * [new branch] gh/laithsakka/259/orig -> origin/gh/laithsakka/259/orig 2025-09-07T07:34:58.3037744Z * [new branch] gh/laithsakka/260/base -> origin/gh/laithsakka/260/base 2025-09-07T07:34:58.3037886Z * [new branch] gh/laithsakka/260/head -> origin/gh/laithsakka/260/head 2025-09-07T07:34:58.3044157Z * [new branch] gh/laithsakka/260/orig -> origin/gh/laithsakka/260/orig 2025-09-07T07:34:58.3044534Z * [new branch] gh/laithsakka/261/base -> origin/gh/laithsakka/261/base 2025-09-07T07:34:58.3044934Z * [new branch] gh/laithsakka/261/head -> origin/gh/laithsakka/261/head 2025-09-07T07:34:58.3045350Z * [new branch] gh/laithsakka/261/orig -> 
origin/gh/laithsakka/261/orig 2025-09-07T07:34:58.3045990Z * [new branch] gh/laithsakka/262/base -> origin/gh/laithsakka/262/base 2025-09-07T07:34:58.3046199Z * [new branch] gh/laithsakka/262/head -> origin/gh/laithsakka/262/head 2025-09-07T07:34:58.3046370Z * [new branch] gh/laithsakka/262/orig -> origin/gh/laithsakka/262/orig 2025-09-07T07:34:58.3046543Z * [new branch] gh/laithsakka/263/base -> origin/gh/laithsakka/263/base 2025-09-07T07:34:58.3047080Z * [new branch] gh/laithsakka/263/head -> origin/gh/laithsakka/263/head 2025-09-07T07:34:58.3047265Z * [new branch] gh/laithsakka/263/orig -> origin/gh/laithsakka/263/orig 2025-09-07T07:34:58.3047427Z * [new branch] gh/laithsakka/264/base -> origin/gh/laithsakka/264/base 2025-09-07T07:34:58.3047588Z * [new branch] gh/laithsakka/264/head -> origin/gh/laithsakka/264/head 2025-09-07T07:34:58.3058195Z * [new branch] gh/laithsakka/264/orig -> origin/gh/laithsakka/264/orig 2025-09-07T07:34:58.3058775Z * [new branch] gh/laithsakka/265/base -> origin/gh/laithsakka/265/base 2025-09-07T07:34:58.3058996Z * [new branch] gh/laithsakka/265/head -> origin/gh/laithsakka/265/head 2025-09-07T07:34:58.3059389Z * [new branch] gh/laithsakka/265/orig -> origin/gh/laithsakka/265/orig 2025-09-07T07:34:58.3059560Z * [new branch] gh/laithsakka/266/base -> origin/gh/laithsakka/266/base 2025-09-07T07:34:58.3063147Z * [new branch] gh/laithsakka/266/head -> origin/gh/laithsakka/266/head 2025-09-07T07:34:58.3063711Z * [new branch] gh/laithsakka/266/orig -> origin/gh/laithsakka/266/orig 2025-09-07T07:34:58.3063919Z * [new branch] gh/laithsakka/267/base -> origin/gh/laithsakka/267/base 2025-09-07T07:34:58.3064078Z * [new branch] gh/laithsakka/267/head -> origin/gh/laithsakka/267/head 2025-09-07T07:34:58.3064268Z * [new branch] gh/laithsakka/267/orig -> origin/gh/laithsakka/267/orig 2025-09-07T07:34:58.3064424Z * [new branch] gh/laithsakka/268/base -> origin/gh/laithsakka/268/base 2025-09-07T07:34:58.3064574Z * [new branch] gh/laithsakka/268/head -> origin/gh/laithsakka/268/head 2025-09-07T07:34:58.3064722Z * [new branch] gh/laithsakka/268/orig -> origin/gh/laithsakka/268/orig 2025-09-07T07:34:58.3068075Z * [new branch] gh/laithsakka/28/base -> origin/gh/laithsakka/28/base 2025-09-07T07:34:58.3073889Z * [new branch] gh/laithsakka/29/base -> origin/gh/laithsakka/29/base 2025-09-07T07:34:58.3076225Z * [new branch] gh/laithsakka/30/base -> origin/gh/laithsakka/30/base 2025-09-07T07:34:58.3076609Z * [new branch] gh/laithsakka/30/head -> origin/gh/laithsakka/30/head 2025-09-07T07:34:58.3082084Z * [new branch] gh/laithsakka/31/base -> origin/gh/laithsakka/31/base 2025-09-07T07:34:58.3082314Z * [new branch] gh/laithsakka/31/head -> origin/gh/laithsakka/31/head 2025-09-07T07:34:58.3082495Z * [new branch] gh/laithsakka/32/base -> origin/gh/laithsakka/32/base 2025-09-07T07:34:58.3082658Z * [new branch] gh/laithsakka/32/head -> origin/gh/laithsakka/32/head 2025-09-07T07:34:58.3082845Z * [new branch] gh/lucaskabela/1/base -> origin/gh/lucaskabela/1/base 2025-09-07T07:34:58.3083013Z * [new branch] gh/lucaskabela/1/head -> origin/gh/lucaskabela/1/head 2025-09-07T07:34:58.3083184Z * [new branch] gh/lucaskabela/10/base -> origin/gh/lucaskabela/10/base 2025-09-07T07:34:58.3083352Z * [new branch] gh/lucaskabela/10/head -> origin/gh/lucaskabela/10/head 2025-09-07T07:34:58.3083523Z * [new branch] gh/lucaskabela/10/orig -> origin/gh/lucaskabela/10/orig 2025-09-07T07:34:58.3083684Z * [new branch] gh/lucaskabela/11/base -> origin/gh/lucaskabela/11/base 2025-09-07T07:34:58.3083845Z * [new branch] 
gh/lucaskabela/11/head -> origin/gh/lucaskabela/11/head 2025-09-07T07:34:58.3084014Z * [new branch] gh/lucaskabela/11/orig -> origin/gh/lucaskabela/11/orig 2025-09-07T07:34:58.3084182Z * [new branch] gh/lucaskabela/12/base -> origin/gh/lucaskabela/12/base 2025-09-07T07:34:58.3084339Z * [new branch] gh/lucaskabela/12/head -> origin/gh/lucaskabela/12/head 2025-09-07T07:34:58.3084497Z * [new branch] gh/lucaskabela/12/orig -> origin/gh/lucaskabela/12/orig 2025-09-07T07:34:58.3084667Z * [new branch] gh/lucaskabela/13/base -> origin/gh/lucaskabela/13/base 2025-09-07T07:34:58.3084831Z * [new branch] gh/lucaskabela/13/head -> origin/gh/lucaskabela/13/head 2025-09-07T07:34:58.3085031Z * [new branch] gh/lucaskabela/13/orig -> origin/gh/lucaskabela/13/orig 2025-09-07T07:34:58.3085189Z * [new branch] gh/lucaskabela/14/base -> origin/gh/lucaskabela/14/base 2025-09-07T07:34:58.3085356Z * [new branch] gh/lucaskabela/14/head -> origin/gh/lucaskabela/14/head 2025-09-07T07:34:58.3085514Z * [new branch] gh/lucaskabela/14/orig -> origin/gh/lucaskabela/14/orig 2025-09-07T07:34:58.3085816Z * [new branch] gh/lucaskabela/15/base -> origin/gh/lucaskabela/15/base 2025-09-07T07:34:58.3085991Z * [new branch] gh/lucaskabela/15/head -> origin/gh/lucaskabela/15/head 2025-09-07T07:34:58.3086157Z * [new branch] gh/lucaskabela/15/orig -> origin/gh/lucaskabela/15/orig 2025-09-07T07:34:58.3086327Z * [new branch] gh/lucaskabela/16/base -> origin/gh/lucaskabela/16/base 2025-09-07T07:34:58.3086486Z * [new branch] gh/lucaskabela/16/head -> origin/gh/lucaskabela/16/head 2025-09-07T07:34:58.3086657Z * [new branch] gh/lucaskabela/16/orig -> origin/gh/lucaskabela/16/orig 2025-09-07T07:34:58.3086953Z * [new branch] gh/lucaskabela/17/base -> origin/gh/lucaskabela/17/base 2025-09-07T07:34:58.3087122Z * [new branch] gh/lucaskabela/17/head -> origin/gh/lucaskabela/17/head 2025-09-07T07:34:58.3087296Z * [new branch] gh/lucaskabela/17/orig -> origin/gh/lucaskabela/17/orig 2025-09-07T07:34:58.3087463Z * [new branch] gh/lucaskabela/2/base -> origin/gh/lucaskabela/2/base 2025-09-07T07:34:58.3091186Z * [new branch] gh/lucaskabela/2/head -> origin/gh/lucaskabela/2/head 2025-09-07T07:34:58.3091398Z * [new branch] gh/lucaskabela/2/orig -> origin/gh/lucaskabela/2/orig 2025-09-07T07:34:58.3091725Z * [new branch] gh/lucaskabela/3/base -> origin/gh/lucaskabela/3/base 2025-09-07T07:34:58.3091897Z * [new branch] gh/lucaskabela/3/head -> origin/gh/lucaskabela/3/head 2025-09-07T07:34:58.3092056Z * [new branch] gh/lucaskabela/3/orig -> origin/gh/lucaskabela/3/orig 2025-09-07T07:34:58.3092733Z * [new branch] gh/lucaskabela/4/base -> origin/gh/lucaskabela/4/base 2025-09-07T07:34:58.3093347Z * [new branch] gh/lucaskabela/4/head -> origin/gh/lucaskabela/4/head 2025-09-07T07:34:58.3094238Z * [new branch] gh/lucaskabela/4/orig -> origin/gh/lucaskabela/4/orig 2025-09-07T07:34:58.3095221Z * [new branch] gh/lucaskabela/5/base -> origin/gh/lucaskabela/5/base 2025-09-07T07:34:58.3095507Z * [new branch] gh/lucaskabela/5/head -> origin/gh/lucaskabela/5/head 2025-09-07T07:34:58.3096670Z * [new branch] gh/lucaskabela/5/orig -> origin/gh/lucaskabela/5/orig 2025-09-07T07:34:58.3097100Z * [new branch] gh/lucaskabela/6/base -> origin/gh/lucaskabela/6/base 2025-09-07T07:34:58.3100307Z * [new branch] gh/lucaskabela/6/head -> origin/gh/lucaskabela/6/head 2025-09-07T07:34:58.3100666Z * [new branch] gh/lucaskabela/6/orig -> origin/gh/lucaskabela/6/orig 2025-09-07T07:34:58.3100816Z * [new branch] gh/lucaskabela/7/base -> origin/gh/lucaskabela/7/base 2025-09-07T07:34:58.3100968Z * 
[new branch] gh/lucaskabela/7/head -> origin/gh/lucaskabela/7/head 2025-09-07T07:34:58.3101134Z * [new branch] gh/lucaskabela/7/orig -> origin/gh/lucaskabela/7/orig 2025-09-07T07:34:58.3101583Z * [new branch] gh/lucaskabela/8/base -> origin/gh/lucaskabela/8/base 2025-09-07T07:34:58.3106993Z * [new branch] gh/lucaskabela/8/head -> origin/gh/lucaskabela/8/head 2025-09-07T07:34:58.3107230Z * [new branch] gh/lucaskabela/8/orig -> origin/gh/lucaskabela/8/orig 2025-09-07T07:34:58.3107619Z * [new branch] gh/lucaskabela/9/base -> origin/gh/lucaskabela/9/base 2025-09-07T07:34:58.3107801Z * [new branch] gh/lucaskabela/9/head -> origin/gh/lucaskabela/9/head 2025-09-07T07:34:58.3107951Z * [new branch] gh/lucaskabela/9/orig -> origin/gh/lucaskabela/9/orig 2025-09-07T07:34:58.3108092Z * [new branch] gh/lw/3/base -> origin/gh/lw/3/base 2025-09-07T07:34:58.3108280Z * [new branch] gh/lw/3/head -> origin/gh/lw/3/head 2025-09-07T07:34:58.3108402Z * [new branch] gh/lw/3/orig -> origin/gh/lw/3/orig 2025-09-07T07:34:58.3110429Z * [new branch] gh/malfet/14/base -> origin/gh/malfet/14/base 2025-09-07T07:34:58.3110622Z * [new branch] gh/malfet/330/base -> origin/gh/malfet/330/base 2025-09-07T07:34:58.3110788Z * [new branch] gh/malfet/330/head -> origin/gh/malfet/330/head 2025-09-07T07:34:58.3143990Z * [new branch] gh/malfet/330/orig -> origin/gh/malfet/330/orig 2025-09-07T07:34:58.3144413Z * [new branch] gh/malfet/396/base -> origin/gh/malfet/396/base 2025-09-07T07:34:58.3144662Z * [new branch] gh/malfet/396/head -> origin/gh/malfet/396/head 2025-09-07T07:34:58.3144833Z * [new branch] gh/malfet/396/orig -> origin/gh/malfet/396/orig 2025-09-07T07:34:58.3144960Z * [new branch] gh/malfet/397/base -> origin/gh/malfet/397/base 2025-09-07T07:34:58.3145288Z * [new branch] gh/malfet/397/head -> origin/gh/malfet/397/head 2025-09-07T07:34:58.3145417Z * [new branch] gh/malfet/397/orig -> origin/gh/malfet/397/orig 2025-09-07T07:34:58.3145682Z * [new branch] gh/malfet/398/base -> origin/gh/malfet/398/base 2025-09-07T07:34:58.3146650Z * [new branch] gh/malfet/398/head -> origin/gh/malfet/398/head 2025-09-07T07:34:58.3147015Z * [new branch] gh/malfet/398/orig -> origin/gh/malfet/398/orig 2025-09-07T07:34:58.3147320Z * [new branch] gh/malfet/399/base -> origin/gh/malfet/399/base 2025-09-07T07:34:58.3147554Z * [new branch] gh/malfet/399/head -> origin/gh/malfet/399/head 2025-09-07T07:34:58.3148226Z * [new branch] gh/malfet/399/orig -> origin/gh/malfet/399/orig 2025-09-07T07:34:58.3148464Z * [new branch] gh/malfet/414/base -> origin/gh/malfet/414/base 2025-09-07T07:34:58.3148627Z * [new branch] gh/malfet/414/head -> origin/gh/malfet/414/head 2025-09-07T07:34:58.3148772Z * [new branch] gh/malfet/414/orig -> origin/gh/malfet/414/orig 2025-09-07T07:34:58.3148921Z * [new branch] gh/malfet/417/base -> origin/gh/malfet/417/base 2025-09-07T07:34:58.3149068Z * [new branch] gh/malfet/417/head -> origin/gh/malfet/417/head 2025-09-07T07:34:58.3149219Z * [new branch] gh/malfet/417/orig -> origin/gh/malfet/417/orig 2025-09-07T07:34:58.3149365Z * [new branch] gh/malfet/418/base -> origin/gh/malfet/418/base 2025-09-07T07:34:58.3149504Z * [new branch] gh/malfet/418/head -> origin/gh/malfet/418/head 2025-09-07T07:34:58.3149653Z * [new branch] gh/malfet/418/orig -> origin/gh/malfet/418/orig 2025-09-07T07:34:58.3149795Z * [new branch] gh/malfet/475/base -> origin/gh/malfet/475/base 2025-09-07T07:34:58.3149944Z * [new branch] gh/malfet/475/head -> origin/gh/malfet/475/head 2025-09-07T07:34:58.3150082Z * [new branch] gh/malfet/475/orig -> 
origin/gh/malfet/475/orig 2025-09-07T07:34:58.3150272Z * [new branch] gh/malfet/476/base -> origin/gh/malfet/476/base 2025-09-07T07:34:58.3150432Z * [new branch] gh/malfet/476/head -> origin/gh/malfet/476/head 2025-09-07T07:34:58.3150567Z * [new branch] gh/malfet/476/orig -> origin/gh/malfet/476/orig 2025-09-07T07:34:58.3150713Z * [new branch] gh/malfet/477/base -> origin/gh/malfet/477/base 2025-09-07T07:34:58.3150851Z * [new branch] gh/malfet/477/head -> origin/gh/malfet/477/head 2025-09-07T07:34:58.3150998Z * [new branch] gh/malfet/477/orig -> origin/gh/malfet/477/orig 2025-09-07T07:34:58.3151337Z * [new branch] gh/malfet/478/base -> origin/gh/malfet/478/base 2025-09-07T07:34:58.3151476Z * [new branch] gh/malfet/478/head -> origin/gh/malfet/478/head 2025-09-07T07:34:58.3151623Z * [new branch] gh/malfet/478/orig -> origin/gh/malfet/478/orig 2025-09-07T07:34:58.3151761Z * [new branch] gh/malfet/479/base -> origin/gh/malfet/479/base 2025-09-07T07:34:58.3151910Z * [new branch] gh/malfet/479/head -> origin/gh/malfet/479/head 2025-09-07T07:34:58.3152049Z * [new branch] gh/malfet/479/orig -> origin/gh/malfet/479/orig 2025-09-07T07:34:58.3152193Z * [new branch] gh/malfet/480/base -> origin/gh/malfet/480/base 2025-09-07T07:34:58.3152337Z * [new branch] gh/malfet/480/head -> origin/gh/malfet/480/head 2025-09-07T07:34:58.3152475Z * [new branch] gh/malfet/480/orig -> origin/gh/malfet/480/orig 2025-09-07T07:34:58.3152623Z * [new branch] gh/malfet/481/base -> origin/gh/malfet/481/base 2025-09-07T07:34:58.3152760Z * [new branch] gh/malfet/481/head -> origin/gh/malfet/481/head 2025-09-07T07:34:58.3152903Z * [new branch] gh/malfet/481/orig -> origin/gh/malfet/481/orig 2025-09-07T07:34:58.3153039Z * [new branch] gh/malfet/482/base -> origin/gh/malfet/482/base 2025-09-07T07:34:58.3153229Z * [new branch] gh/malfet/482/head -> origin/gh/malfet/482/head 2025-09-07T07:34:58.3153365Z * [new branch] gh/malfet/482/orig -> origin/gh/malfet/482/orig 2025-09-07T07:34:58.3153498Z * [new branch] gh/malfet/483/base -> origin/gh/malfet/483/base 2025-09-07T07:34:58.3153639Z * [new branch] gh/malfet/483/head -> origin/gh/malfet/483/head 2025-09-07T07:34:58.3153772Z * [new branch] gh/malfet/483/orig -> origin/gh/malfet/483/orig 2025-09-07T07:34:58.3153928Z * [new branch] gh/malfet/484/base -> origin/gh/malfet/484/base 2025-09-07T07:34:58.3154061Z * [new branch] gh/malfet/484/head -> origin/gh/malfet/484/head 2025-09-07T07:34:58.3154194Z * [new branch] gh/malfet/484/orig -> origin/gh/malfet/484/orig 2025-09-07T07:34:58.3154335Z * [new branch] gh/malfet/485/base -> origin/gh/malfet/485/base 2025-09-07T07:34:58.3154483Z * [new branch] gh/malfet/485/head -> origin/gh/malfet/485/head 2025-09-07T07:34:58.3154615Z * [new branch] gh/malfet/485/orig -> origin/gh/malfet/485/orig 2025-09-07T07:34:58.3154740Z * [new branch] gh/malfet/486/base -> origin/gh/malfet/486/base 2025-09-07T07:34:58.3154870Z * [new branch] gh/malfet/486/head -> origin/gh/malfet/486/head 2025-09-07T07:34:58.3155002Z * [new branch] gh/malfet/486/orig -> origin/gh/malfet/486/orig 2025-09-07T07:34:58.3155200Z * [new branch] gh/malfet/487/base -> origin/gh/malfet/487/base 2025-09-07T07:34:58.3156477Z * [new branch] gh/malfet/487/head -> origin/gh/malfet/487/head 2025-09-07T07:34:58.3156649Z * [new branch] gh/malfet/487/orig -> origin/gh/malfet/487/orig 2025-09-07T07:34:58.3161452Z * [new branch] gh/malfet/488/base -> origin/gh/malfet/488/base 2025-09-07T07:34:58.3162027Z * [new branch] gh/malfet/488/head -> origin/gh/malfet/488/head 2025-09-07T07:34:58.3162220Z * 
[new branch] gh/malfet/488/orig -> origin/gh/malfet/488/orig 2025-09-07T07:34:58.3162366Z * [new branch] gh/malfet/489/base -> origin/gh/malfet/489/base 2025-09-07T07:34:58.3162515Z * [new branch] gh/malfet/489/head -> origin/gh/malfet/489/head 2025-09-07T07:34:58.3162670Z * [new branch] gh/malfet/489/orig -> origin/gh/malfet/489/orig 2025-09-07T07:34:58.3162976Z * [new branch] gh/malfet/490/base -> origin/gh/malfet/490/base 2025-09-07T07:34:58.3163134Z * [new branch] gh/malfet/490/head -> origin/gh/malfet/490/head 2025-09-07T07:34:58.3163513Z * [new branch] gh/malfet/490/orig -> origin/gh/malfet/490/orig 2025-09-07T07:34:58.3165488Z * [new branch] gh/malfet/491/base -> origin/gh/malfet/491/base 2025-09-07T07:34:58.3165664Z * [new branch] gh/malfet/491/head -> origin/gh/malfet/491/head 2025-09-07T07:34:58.3165815Z * [new branch] gh/malfet/491/orig -> origin/gh/malfet/491/orig 2025-09-07T07:34:58.3167250Z * [new branch] gh/malfet/492/base -> origin/gh/malfet/492/base 2025-09-07T07:34:58.3167610Z * [new branch] gh/malfet/492/head -> origin/gh/malfet/492/head 2025-09-07T07:34:58.3172728Z * [new branch] gh/malfet/492/orig -> origin/gh/malfet/492/orig 2025-09-07T07:34:58.3172915Z * [new branch] gh/malfet/493/base -> origin/gh/malfet/493/base 2025-09-07T07:34:58.3173049Z * [new branch] gh/malfet/493/head -> origin/gh/malfet/493/head 2025-09-07T07:34:58.3173190Z * [new branch] gh/malfet/493/orig -> origin/gh/malfet/493/orig 2025-09-07T07:34:58.3173488Z * [new branch] gh/malfet/494/base -> origin/gh/malfet/494/base 2025-09-07T07:34:58.3173631Z * [new branch] gh/malfet/494/head -> origin/gh/malfet/494/head 2025-09-07T07:34:58.3173764Z * [new branch] gh/malfet/494/orig -> origin/gh/malfet/494/orig 2025-09-07T07:34:58.3174030Z * [new branch] gh/malfet/495/base -> origin/gh/malfet/495/base 2025-09-07T07:34:58.3176259Z * [new branch] gh/malfet/495/head -> origin/gh/malfet/495/head 2025-09-07T07:34:58.3176580Z * [new branch] gh/malfet/495/orig -> origin/gh/malfet/495/orig 2025-09-07T07:34:58.3179113Z * [new branch] gh/malfet/496/base -> origin/gh/malfet/496/base 2025-09-07T07:34:58.3179443Z * [new branch] gh/malfet/496/head -> origin/gh/malfet/496/head 2025-09-07T07:34:58.3179593Z * [new branch] gh/malfet/496/orig -> origin/gh/malfet/496/orig 2025-09-07T07:34:58.3179826Z * [new branch] gh/malfet/497/base -> origin/gh/malfet/497/base 2025-09-07T07:34:58.3179997Z * [new branch] gh/malfet/497/head -> origin/gh/malfet/497/head 2025-09-07T07:34:58.3180281Z * [new branch] gh/malfet/497/orig -> origin/gh/malfet/497/orig 2025-09-07T07:34:58.3181723Z * [new branch] gh/malfet/498/base -> origin/gh/malfet/498/base 2025-09-07T07:34:58.3182047Z * [new branch] gh/malfet/498/head -> origin/gh/malfet/498/head 2025-09-07T07:34:58.3182219Z * [new branch] gh/malfet/498/orig -> origin/gh/malfet/498/orig 2025-09-07T07:34:58.3182469Z * [new branch] gh/malfet/499/base -> origin/gh/malfet/499/base 2025-09-07T07:34:58.3185585Z * [new branch] gh/malfet/499/head -> origin/gh/malfet/499/head 2025-09-07T07:34:58.3185770Z * [new branch] gh/malfet/499/orig -> origin/gh/malfet/499/orig 2025-09-07T07:34:58.3185970Z * [new branch] gh/malfet/500/base -> origin/gh/malfet/500/base 2025-09-07T07:34:58.3186106Z * [new branch] gh/malfet/500/head -> origin/gh/malfet/500/head 2025-09-07T07:34:58.3186282Z * [new branch] gh/malfet/500/orig -> origin/gh/malfet/500/orig 2025-09-07T07:34:58.3191165Z * [new branch] gh/malfet/501/base -> origin/gh/malfet/501/base 2025-09-07T07:34:58.3191347Z * [new branch] gh/malfet/501/head -> 
origin/gh/malfet/501/head 2025-09-07T07:34:58.3191655Z * [new branch] gh/malfet/501/orig -> origin/gh/malfet/501/orig 2025-09-07T07:34:58.3191807Z * [new branch] gh/malfet/502/base -> origin/gh/malfet/502/base 2025-09-07T07:34:58.3191967Z * [new branch] gh/malfet/502/head -> origin/gh/malfet/502/head 2025-09-07T07:34:58.3192114Z * [new branch] gh/malfet/502/orig -> origin/gh/malfet/502/orig 2025-09-07T07:34:58.3192281Z * [new branch] gh/malfet/503/base -> origin/gh/malfet/503/base 2025-09-07T07:34:58.3192821Z * [new branch] gh/malfet/503/head -> origin/gh/malfet/503/head 2025-09-07T07:34:58.3193446Z * [new branch] gh/malfet/503/orig -> origin/gh/malfet/503/orig 2025-09-07T07:34:58.3194875Z * [new branch] gh/malfet/504/base -> origin/gh/malfet/504/base 2025-09-07T07:34:58.3195152Z * [new branch] gh/malfet/504/head -> origin/gh/malfet/504/head 2025-09-07T07:34:58.3195655Z * [new branch] gh/malfet/504/orig -> origin/gh/malfet/504/orig 2025-09-07T07:34:58.3200742Z * [new branch] gh/malfet/505/base -> origin/gh/malfet/505/base 2025-09-07T07:34:58.3201218Z * [new branch] gh/malfet/505/head -> origin/gh/malfet/505/head 2025-09-07T07:34:58.3201363Z * [new branch] gh/malfet/505/orig -> origin/gh/malfet/505/orig 2025-09-07T07:34:58.3201742Z * [new branch] gh/malfet/506/base -> origin/gh/malfet/506/base 2025-09-07T07:34:58.3201880Z * [new branch] gh/malfet/506/head -> origin/gh/malfet/506/head 2025-09-07T07:34:58.3202022Z * [new branch] gh/malfet/506/orig -> origin/gh/malfet/506/orig 2025-09-07T07:34:58.3202167Z * [new branch] gh/malfet/507/base -> origin/gh/malfet/507/base 2025-09-07T07:34:58.3202324Z * [new branch] gh/malfet/507/head -> origin/gh/malfet/507/head 2025-09-07T07:34:58.3202471Z * [new branch] gh/malfet/507/orig -> origin/gh/malfet/507/orig 2025-09-07T07:34:58.3204253Z * [new branch] gh/malfet/508/base -> origin/gh/malfet/508/base 2025-09-07T07:34:58.3204579Z * [new branch] gh/malfet/508/head -> origin/gh/malfet/508/head 2025-09-07T07:34:58.3205115Z * [new branch] gh/malfet/508/orig -> origin/gh/malfet/508/orig 2025-09-07T07:34:58.3206154Z * [new branch] gh/malfet/509/base -> origin/gh/malfet/509/base 2025-09-07T07:34:58.3206513Z * [new branch] gh/malfet/509/head -> origin/gh/malfet/509/head 2025-09-07T07:34:58.3211743Z * [new branch] gh/malfet/509/orig -> origin/gh/malfet/509/orig 2025-09-07T07:34:58.3211901Z * [new branch] gh/malfet/510/base -> origin/gh/malfet/510/base 2025-09-07T07:34:58.3212033Z * [new branch] gh/malfet/510/head -> origin/gh/malfet/510/head 2025-09-07T07:34:58.3212183Z * [new branch] gh/malfet/510/orig -> origin/gh/malfet/510/orig 2025-09-07T07:34:58.3212320Z * [new branch] gh/malfet/511/base -> origin/gh/malfet/511/base 2025-09-07T07:34:58.3212464Z * [new branch] gh/malfet/511/head -> origin/gh/malfet/511/head 2025-09-07T07:34:58.3212599Z * [new branch] gh/malfet/511/orig -> origin/gh/malfet/511/orig 2025-09-07T07:34:58.3218000Z * [new branch] gh/malfet/512/base -> origin/gh/malfet/512/base 2025-09-07T07:34:58.3224343Z * [new branch] gh/malfet/512/head -> origin/gh/malfet/512/head 2025-09-07T07:34:58.3231876Z * [new branch] gh/malfet/512/orig -> origin/gh/malfet/512/orig 2025-09-07T07:34:58.3236925Z * [new branch] gh/malfet/513/base -> origin/gh/malfet/513/base 2025-09-07T07:34:58.3242419Z * [new branch] gh/malfet/513/head -> origin/gh/malfet/513/head 2025-09-07T07:34:58.3245188Z * [new branch] gh/malfet/513/orig -> origin/gh/malfet/513/orig 2025-09-07T07:34:58.3245598Z * [new branch] gh/malfet/64/base -> origin/gh/malfet/64/base 2025-09-07T07:34:58.3245767Z * 
[new branch] gh/malfet/64/head -> origin/gh/malfet/64/head 2025-09-07T07:34:58.3245971Z * [new branch] gh/manuelcandales/10/base -> origin/gh/manuelcandales/10/base 2025-09-07T07:34:58.3246151Z * [new branch] gh/manuelcandales/10/head -> origin/gh/manuelcandales/10/head 2025-09-07T07:34:58.3246330Z * [new branch] gh/manuelcandales/10/orig -> origin/gh/manuelcandales/10/orig 2025-09-07T07:34:58.3246530Z * [new branch] gh/manuelcandales/11/base -> origin/gh/manuelcandales/11/base 2025-09-07T07:34:58.3246749Z * [new branch] gh/manuelcandales/11/head -> origin/gh/manuelcandales/11/head 2025-09-07T07:34:58.3246937Z * [new branch] gh/manuelcandales/11/orig -> origin/gh/manuelcandales/11/orig 2025-09-07T07:34:58.3247134Z * [new branch] gh/manuelcandales/9/base -> origin/gh/manuelcandales/9/base 2025-09-07T07:34:58.3247307Z * [new branch] gh/manuelcandales/9/head -> origin/gh/manuelcandales/9/head 2025-09-07T07:34:58.3247471Z * [new branch] gh/manuelcandales/9/orig -> origin/gh/manuelcandales/9/orig 2025-09-07T07:34:58.3247820Z * [new branch] gh/markkm/1/base -> origin/gh/markkm/1/base 2025-09-07T07:34:58.3247981Z * [new branch] gh/masnesral/204/base -> origin/gh/masnesral/204/base 2025-09-07T07:34:58.3248137Z * [new branch] gh/masnesral/204/head -> origin/gh/masnesral/204/head 2025-09-07T07:34:58.3248300Z * [new branch] gh/masnesral/204/orig -> origin/gh/masnesral/204/orig 2025-09-07T07:34:58.3248450Z * [new branch] gh/masnesral/235/base -> origin/gh/masnesral/235/base 2025-09-07T07:34:58.3248606Z * [new branch] gh/masnesral/235/head -> origin/gh/masnesral/235/head 2025-09-07T07:34:58.3248757Z * [new branch] gh/masnesral/235/orig -> origin/gh/masnesral/235/orig 2025-09-07T07:34:58.3248914Z * [new branch] gh/masnesral/34/base -> origin/gh/masnesral/34/base 2025-09-07T07:34:58.3249064Z * [new branch] gh/mhorowitz/0/base -> origin/gh/mhorowitz/0/base 2025-09-07T07:34:58.3249218Z * [new branch] gh/mhorowitz/0/head -> origin/gh/mhorowitz/0/head 2025-09-07T07:34:58.3249376Z * [new branch] gh/mhorowitz/1/base -> origin/gh/mhorowitz/1/base 2025-09-07T07:34:58.3249509Z * [new branch] gh/mhorowitz/1/head -> origin/gh/mhorowitz/1/head 2025-09-07T07:34:58.3249652Z * [new branch] gh/mhorowitz/2/base -> origin/gh/mhorowitz/2/base 2025-09-07T07:34:58.3249785Z * [new branch] gh/mhorowitz/2/head -> origin/gh/mhorowitz/2/head 2025-09-07T07:34:58.3249921Z * [new branch] gh/mhorowitz/3/base -> origin/gh/mhorowitz/3/base 2025-09-07T07:34:58.3250066Z * [new branch] gh/mhorowitz/3/head -> origin/gh/mhorowitz/3/head 2025-09-07T07:34:58.3250199Z * [new branch] gh/mhorowitz/4/base -> origin/gh/mhorowitz/4/base 2025-09-07T07:34:58.3250341Z * [new branch] gh/mhorowitz/4/head -> origin/gh/mhorowitz/4/head 2025-09-07T07:34:58.3250484Z * [new branch] gh/mhorowitz/5/base -> origin/gh/mhorowitz/5/base 2025-09-07T07:34:58.3250624Z * [new branch] gh/mhorowitz/5/head -> origin/gh/mhorowitz/5/head 2025-09-07T07:34:58.3250754Z * [new branch] gh/mhorowitz/6/base -> origin/gh/mhorowitz/6/base 2025-09-07T07:34:58.3250886Z * [new branch] gh/mhorowitz/6/head -> origin/gh/mhorowitz/6/head 2025-09-07T07:34:58.3251070Z * [new branch] gh/mikaylagawarecki/234/base -> origin/gh/mikaylagawarecki/234/base 2025-09-07T07:34:58.3251297Z * [new branch] gh/mikaylagawarecki/234/head -> origin/gh/mikaylagawarecki/234/head 2025-09-07T07:34:58.3251467Z * [new branch] gh/mikaylagawarecki/235/base -> origin/gh/mikaylagawarecki/235/base 2025-09-07T07:34:58.3251631Z * [new branch] gh/mikaylagawarecki/235/head -> origin/gh/mikaylagawarecki/235/head 
2025-09-07T07:34:58.3251805Z * [new branch] gh/mikaylagawarecki/236/base -> origin/gh/mikaylagawarecki/236/base 2025-09-07T07:34:58.3251968Z * [new branch] gh/mikaylagawarecki/236/head -> origin/gh/mikaylagawarecki/236/head 2025-09-07T07:34:58.3252128Z * [new branch] gh/mikaylagawarecki/237/base -> origin/gh/mikaylagawarecki/237/base 2025-09-07T07:34:58.3252296Z * [new branch] gh/mikaylagawarecki/237/head -> origin/gh/mikaylagawarecki/237/head 2025-09-07T07:34:58.3252457Z * [new branch] gh/mikaylagawarecki/238/base -> origin/gh/mikaylagawarecki/238/base 2025-09-07T07:34:58.3252629Z * [new branch] gh/mikaylagawarecki/238/head -> origin/gh/mikaylagawarecki/238/head 2025-09-07T07:34:58.3252793Z * [new branch] gh/mikaylagawarecki/317/base -> origin/gh/mikaylagawarecki/317/base 2025-09-07T07:34:58.3258149Z * [new branch] gh/mikaylagawarecki/317/head -> origin/gh/mikaylagawarecki/317/head 2025-09-07T07:34:58.3258726Z * [new branch] gh/mikaylagawarecki/317/orig -> origin/gh/mikaylagawarecki/317/orig 2025-09-07T07:34:58.3258942Z * [new branch] gh/mikaylagawarecki/320/base -> origin/gh/mikaylagawarecki/320/base 2025-09-07T07:34:58.3259128Z * [new branch] gh/mikaylagawarecki/320/head -> origin/gh/mikaylagawarecki/320/head 2025-09-07T07:34:58.3259310Z * [new branch] gh/mikaylagawarecki/320/orig -> origin/gh/mikaylagawarecki/320/orig 2025-09-07T07:34:58.3259489Z * [new branch] gh/mikaylagawarecki/329/base -> origin/gh/mikaylagawarecki/329/base 2025-09-07T07:34:58.3259682Z * [new branch] gh/mikaylagawarecki/329/head -> origin/gh/mikaylagawarecki/329/head 2025-09-07T07:34:58.3259866Z * [new branch] gh/mikaylagawarecki/329/orig -> origin/gh/mikaylagawarecki/329/orig 2025-09-07T07:34:58.3260044Z * [new branch] gh/mikaylagawarecki/330/base -> origin/gh/mikaylagawarecki/330/base 2025-09-07T07:34:58.3260414Z * [new branch] gh/mikaylagawarecki/330/head -> origin/gh/mikaylagawarecki/330/head 2025-09-07T07:34:58.3260606Z * [new branch] gh/mikaylagawarecki/330/orig -> origin/gh/mikaylagawarecki/330/orig 2025-09-07T07:34:58.3260795Z * [new branch] gh/mikaylagawarecki/331/base -> origin/gh/mikaylagawarecki/331/base 2025-09-07T07:34:58.3262043Z * [new branch] gh/mikaylagawarecki/331/head -> origin/gh/mikaylagawarecki/331/head 2025-09-07T07:34:58.3262276Z * [new branch] gh/mikaylagawarecki/331/orig -> origin/gh/mikaylagawarecki/331/orig 2025-09-07T07:34:58.3263404Z * [new branch] gh/mikaylagawarecki/332/base -> origin/gh/mikaylagawarecki/332/base 2025-09-07T07:34:58.3263687Z * [new branch] gh/mikaylagawarecki/332/head -> origin/gh/mikaylagawarecki/332/head 2025-09-07T07:34:58.3266912Z * [new branch] gh/mikaylagawarecki/332/orig -> origin/gh/mikaylagawarecki/332/orig 2025-09-07T07:34:58.3267303Z * [new branch] gh/mikaylagawarecki/334/base -> origin/gh/mikaylagawarecki/334/base 2025-09-07T07:34:58.3267623Z * [new branch] gh/mikaylagawarecki/334/head -> origin/gh/mikaylagawarecki/334/head 2025-09-07T07:34:58.3267893Z * [new branch] gh/mikaylagawarecki/334/orig -> origin/gh/mikaylagawarecki/334/orig 2025-09-07T07:34:58.3268096Z * [new branch] gh/mikaylagawarecki/335/base -> origin/gh/mikaylagawarecki/335/base 2025-09-07T07:34:58.3268283Z * [new branch] gh/mikaylagawarecki/335/head -> origin/gh/mikaylagawarecki/335/head 2025-09-07T07:34:58.3268930Z * [new branch] gh/mikaylagawarecki/335/orig -> origin/gh/mikaylagawarecki/335/orig 2025-09-07T07:34:58.3275416Z * [new branch] gh/mikaylagawarecki/336/base -> origin/gh/mikaylagawarecki/336/base 2025-09-07T07:34:58.3279705Z * [new branch] gh/mikaylagawarecki/336/head -> 
origin/gh/mikaylagawarecki/336/head 2025-09-07T07:34:58.3281572Z * [new branch] gh/mikaylagawarecki/336/orig -> origin/gh/mikaylagawarecki/336/orig 2025-09-07T07:34:58.3281787Z * [new branch] gh/mikaylagawarecki/337/base -> origin/gh/mikaylagawarecki/337/base 2025-09-07T07:34:58.3281966Z * [new branch] gh/mikaylagawarecki/337/head -> origin/gh/mikaylagawarecki/337/head 2025-09-07T07:34:58.3282151Z * [new branch] gh/mikaylagawarecki/337/orig -> origin/gh/mikaylagawarecki/337/orig 2025-09-07T07:34:58.3282322Z * [new branch] gh/mikaylagawarecki/338/base -> origin/gh/mikaylagawarecki/338/base 2025-09-07T07:34:58.3282499Z * [new branch] gh/mikaylagawarecki/338/head -> origin/gh/mikaylagawarecki/338/head 2025-09-07T07:34:58.3282687Z * [new branch] gh/mikaylagawarecki/338/orig -> origin/gh/mikaylagawarecki/338/orig 2025-09-07T07:34:58.3282858Z * [new branch] gh/mikaylagawarecki/339/base -> origin/gh/mikaylagawarecki/339/base 2025-09-07T07:34:58.3283036Z * [new branch] gh/mikaylagawarecki/339/head -> origin/gh/mikaylagawarecki/339/head 2025-09-07T07:34:58.3283338Z * [new branch] gh/mikaylagawarecki/339/orig -> origin/gh/mikaylagawarecki/339/orig 2025-09-07T07:34:58.3283509Z * [new branch] gh/mlazos/1/base -> origin/gh/mlazos/1/base 2025-09-07T07:34:58.3283653Z * [new branch] gh/mlazos/1/head -> origin/gh/mlazos/1/head 2025-09-07T07:34:58.3283796Z * [new branch] gh/mlazos/1/orig -> origin/gh/mlazos/1/orig 2025-09-07T07:34:58.3283942Z * [new branch] gh/mlazos/12/base -> origin/gh/mlazos/12/base 2025-09-07T07:34:58.3284086Z * [new branch] gh/mlazos/12/head -> origin/gh/mlazos/12/head 2025-09-07T07:34:58.3284231Z * [new branch] gh/mlazos/12/orig -> origin/gh/mlazos/12/orig 2025-09-07T07:34:58.3285485Z * [new branch] gh/mlazos/13/base -> origin/gh/mlazos/13/base 2025-09-07T07:34:58.3285633Z * [new branch] gh/mlazos/13/head -> origin/gh/mlazos/13/head 2025-09-07T07:34:58.3285772Z * [new branch] gh/mlazos/13/orig -> origin/gh/mlazos/13/orig 2025-09-07T07:34:58.3285915Z * [new branch] gh/mlazos/14/base -> origin/gh/mlazos/14/base 2025-09-07T07:34:58.3286052Z * [new branch] gh/mlazos/14/head -> origin/gh/mlazos/14/head 2025-09-07T07:34:58.3287276Z * [new branch] gh/mlazos/14/orig -> origin/gh/mlazos/14/orig 2025-09-07T07:34:58.3291890Z * [new branch] gh/mlazos/15/base -> origin/gh/mlazos/15/base 2025-09-07T07:34:58.3296474Z * [new branch] gh/mlazos/15/head -> origin/gh/mlazos/15/head 2025-09-07T07:34:58.3296639Z * [new branch] gh/mlazos/15/orig -> origin/gh/mlazos/15/orig 2025-09-07T07:34:58.3296776Z * [new branch] gh/mlazos/16/base -> origin/gh/mlazos/16/base 2025-09-07T07:34:58.3296932Z * [new branch] gh/mlazos/16/head -> origin/gh/mlazos/16/head 2025-09-07T07:34:58.3297068Z * [new branch] gh/mlazos/16/orig -> origin/gh/mlazos/16/orig 2025-09-07T07:34:58.3297204Z * [new branch] gh/mlazos/17/base -> origin/gh/mlazos/17/base 2025-09-07T07:34:58.3297334Z * [new branch] gh/mlazos/17/head -> origin/gh/mlazos/17/head 2025-09-07T07:34:58.3297462Z * [new branch] gh/mlazos/17/orig -> origin/gh/mlazos/17/orig 2025-09-07T07:34:58.3297609Z * [new branch] gh/mlazos/2/base -> origin/gh/mlazos/2/base 2025-09-07T07:34:58.3297884Z * [new branch] gh/mlazos/2/head -> origin/gh/mlazos/2/head 2025-09-07T07:34:58.3298020Z * [new branch] gh/mlazos/2/orig -> origin/gh/mlazos/2/orig 2025-09-07T07:34:58.3303828Z * [new branch] gh/mlazos/3/base -> origin/gh/mlazos/3/base 2025-09-07T07:34:58.3309422Z * [new branch] gh/mlazos/3/head -> origin/gh/mlazos/3/head 2025-09-07T07:34:58.3314550Z * [new branch] gh/mlazos/3/orig -> 
origin/gh/mlazos/3/orig 2025-09-07T07:34:58.3319020Z * [new branch] gh/mrmiywj/1/base -> origin/gh/mrmiywj/1/base 2025-09-07T07:34:58.3320943Z * [new branch] gh/mrmiywj/1/head -> origin/gh/mrmiywj/1/head 2025-09-07T07:34:58.3321142Z * [new branch] gh/muchulee8/62/base -> origin/gh/muchulee8/62/base 2025-09-07T07:34:58.3321299Z * [new branch] gh/muchulee8/62/head -> origin/gh/muchulee8/62/head 2025-09-07T07:34:58.3321462Z * [new branch] gh/muchulee8/62/orig -> origin/gh/muchulee8/62/orig 2025-09-07T07:34:58.3321616Z * [new branch] gh/muchulee8/63/base -> origin/gh/muchulee8/63/base 2025-09-07T07:34:58.3321769Z * [new branch] gh/muchulee8/63/head -> origin/gh/muchulee8/63/head 2025-09-07T07:34:58.3322055Z * [new branch] gh/muchulee8/63/orig -> origin/gh/muchulee8/63/orig 2025-09-07T07:34:58.3322209Z * [new branch] gh/muchulee8/64/base -> origin/gh/muchulee8/64/base 2025-09-07T07:34:58.3322359Z * [new branch] gh/muchulee8/64/head -> origin/gh/muchulee8/64/head 2025-09-07T07:34:58.3322516Z * [new branch] gh/muchulee8/64/orig -> origin/gh/muchulee8/64/orig 2025-09-07T07:34:58.3322667Z * [new branch] gh/muchulee8/65/base -> origin/gh/muchulee8/65/base 2025-09-07T07:34:58.3322816Z * [new branch] gh/muchulee8/65/head -> origin/gh/muchulee8/65/head 2025-09-07T07:34:58.3322974Z * [new branch] gh/muchulee8/65/orig -> origin/gh/muchulee8/65/orig 2025-09-07T07:34:58.3323147Z * [new branch] gh/naveenthangudu/1/base -> origin/gh/naveenthangudu/1/base 2025-09-07T07:34:58.3323335Z * [new branch] gh/naveenthangudu/1/head -> origin/gh/naveenthangudu/1/head 2025-09-07T07:34:58.3323504Z * [new branch] gh/naveenthangudu/1/orig -> origin/gh/naveenthangudu/1/orig 2025-09-07T07:34:58.3323668Z * [new branch] gh/naveenthangudu/2/base -> origin/gh/naveenthangudu/2/base 2025-09-07T07:34:58.3323833Z * [new branch] gh/naveenthangudu/2/head -> origin/gh/naveenthangudu/2/head 2025-09-07T07:34:58.3323988Z * [new branch] gh/naveenthangudu/2/orig -> origin/gh/naveenthangudu/2/orig 2025-09-07T07:34:58.3324153Z * [new branch] gh/naveenthangudu/3/base -> origin/gh/naveenthangudu/3/base 2025-09-07T07:34:58.3324312Z * [new branch] gh/naveenthangudu/3/head -> origin/gh/naveenthangudu/3/head 2025-09-07T07:34:58.3324472Z * [new branch] gh/naveenthangudu/3/orig -> origin/gh/naveenthangudu/3/orig 2025-09-07T07:34:58.3324623Z * [new branch] gh/naveenthangudu/4/base -> origin/gh/naveenthangudu/4/base 2025-09-07T07:34:58.3324783Z * [new branch] gh/naveenthangudu/4/head -> origin/gh/naveenthangudu/4/head 2025-09-07T07:34:58.3324941Z * [new branch] gh/naveenthangudu/4/orig -> origin/gh/naveenthangudu/4/orig 2025-09-07T07:34:58.3325096Z * [new branch] gh/naveenthangudu/5/base -> origin/gh/naveenthangudu/5/base 2025-09-07T07:34:58.3325256Z * [new branch] gh/naveenthangudu/5/head -> origin/gh/naveenthangudu/5/head 2025-09-07T07:34:58.3325407Z * [new branch] gh/naveenthangudu/5/orig -> origin/gh/naveenthangudu/5/orig 2025-09-07T07:34:58.3325567Z * [new branch] gh/naveenthangudu/6/base -> origin/gh/naveenthangudu/6/base 2025-09-07T07:34:58.3325777Z * [new branch] gh/naveenthangudu/6/head -> origin/gh/naveenthangudu/6/head 2025-09-07T07:34:58.3325938Z * [new branch] gh/naveenthangudu/6/orig -> origin/gh/naveenthangudu/6/orig 2025-09-07T07:34:58.3326079Z * [new branch] gh/oulgen/35/base -> origin/gh/oulgen/35/base 2025-09-07T07:34:58.3326224Z * [new branch] gh/oulgen/35/head -> origin/gh/oulgen/35/head 2025-09-07T07:34:58.3326367Z * [new branch] gh/oulgen/35/orig -> origin/gh/oulgen/35/orig 2025-09-07T07:34:58.3326499Z * [new branch] 
gh/oulgen/48/base -> origin/gh/oulgen/48/base 2025-09-07T07:34:58.3326651Z * [new branch] gh/oulgen/48/head -> origin/gh/oulgen/48/head 2025-09-07T07:34:58.3328520Z * [new branch] gh/oulgen/48/orig -> origin/gh/oulgen/48/orig 2025-09-07T07:34:58.3328806Z * [new branch] gh/oulgen/49/base -> origin/gh/oulgen/49/base 2025-09-07T07:34:58.3328970Z * [new branch] gh/oulgen/49/head -> origin/gh/oulgen/49/head 2025-09-07T07:34:58.3330051Z * [new branch] gh/oulgen/49/orig -> origin/gh/oulgen/49/orig 2025-09-07T07:34:58.3332848Z * [new branch] gh/pearu/108/base -> origin/gh/pearu/108/base 2025-09-07T07:34:58.3337507Z * [new branch] gh/pearu/108/head -> origin/gh/pearu/108/head 2025-09-07T07:34:58.3339135Z * [new branch] gh/pearu/108/orig -> origin/gh/pearu/108/orig 2025-09-07T07:34:58.3339287Z * [new branch] gh/pearu/109/base -> origin/gh/pearu/109/base 2025-09-07T07:34:58.3339423Z * [new branch] gh/pearu/109/head -> origin/gh/pearu/109/head 2025-09-07T07:34:58.3339575Z * [new branch] gh/pearu/109/orig -> origin/gh/pearu/109/orig 2025-09-07T07:34:58.3339722Z * [new branch] gh/pearu/110/base -> origin/gh/pearu/110/base 2025-09-07T07:34:58.3339862Z * [new branch] gh/pearu/110/head -> origin/gh/pearu/110/head 2025-09-07T07:34:58.3339996Z * [new branch] gh/pearu/110/orig -> origin/gh/pearu/110/orig 2025-09-07T07:34:58.3340135Z * [new branch] gh/pearu/111/base -> origin/gh/pearu/111/base 2025-09-07T07:34:58.3340273Z * [new branch] gh/pearu/111/head -> origin/gh/pearu/111/head 2025-09-07T07:34:58.3340406Z * [new branch] gh/pearu/111/orig -> origin/gh/pearu/111/orig 2025-09-07T07:34:58.3342728Z * [new branch] gh/pearu/112/base -> origin/gh/pearu/112/base 2025-09-07T07:34:58.3343234Z * [new branch] gh/pearu/112/head -> origin/gh/pearu/112/head 2025-09-07T07:34:58.3343406Z * [new branch] gh/pearu/112/orig -> origin/gh/pearu/112/orig 2025-09-07T07:34:58.3343582Z * [new branch] gh/pearu/113/base -> origin/gh/pearu/113/base 2025-09-07T07:34:58.3350503Z * [new branch] gh/pearu/113/head -> origin/gh/pearu/113/head 2025-09-07T07:34:58.3355137Z * [new branch] gh/pearu/113/orig -> origin/gh/pearu/113/orig 2025-09-07T07:34:58.3358783Z * [new branch] gh/pearu/114/base -> origin/gh/pearu/114/base 2025-09-07T07:34:58.3359088Z * [new branch] gh/pearu/114/head -> origin/gh/pearu/114/head 2025-09-07T07:34:58.3359292Z * [new branch] gh/pearu/114/orig -> origin/gh/pearu/114/orig 2025-09-07T07:34:58.3359465Z * [new branch] gh/pearu/115/base -> origin/gh/pearu/115/base 2025-09-07T07:34:58.3359631Z * [new branch] gh/pearu/115/head -> origin/gh/pearu/115/head 2025-09-07T07:34:58.3359773Z * [new branch] gh/pearu/115/orig -> origin/gh/pearu/115/orig 2025-09-07T07:34:58.3360299Z * [new branch] gh/pearu/116/base -> origin/gh/pearu/116/base 2025-09-07T07:34:58.3360990Z * [new branch] gh/pearu/116/head -> origin/gh/pearu/116/head 2025-09-07T07:34:58.3361180Z * [new branch] gh/pearu/116/orig -> origin/gh/pearu/116/orig 2025-09-07T07:34:58.3361329Z * [new branch] gh/pearu/117/base -> origin/gh/pearu/117/base 2025-09-07T07:34:58.3361490Z * [new branch] gh/pearu/117/head -> origin/gh/pearu/117/head 2025-09-07T07:34:58.3361642Z * [new branch] gh/pearu/117/orig -> origin/gh/pearu/117/orig 2025-09-07T07:34:58.3361796Z * [new branch] gh/pearu/56/base -> origin/gh/pearu/56/base 2025-09-07T07:34:58.3361939Z * [new branch] gh/pearu/56/head -> origin/gh/pearu/56/head 2025-09-07T07:34:58.3362088Z * [new branch] gh/pearu/56/orig -> origin/gh/pearu/56/orig 2025-09-07T07:34:58.3362237Z * [new branch] gh/pearu/97/base -> origin/gh/pearu/97/base 
2025-09-07T07:34:58.3362379Z * [new branch] gh/pearu/97/head -> origin/gh/pearu/97/head 2025-09-07T07:34:58.3362512Z * [new branch] gh/pearu/97/orig -> origin/gh/pearu/97/orig 2025-09-07T07:34:58.3362652Z * [new branch] gh/qqaatw/29/base -> origin/gh/qqaatw/29/base 2025-09-07T07:34:58.3363026Z * [new branch] gh/qqaatw/29/head -> origin/gh/qqaatw/29/head 2025-09-07T07:34:58.3363171Z * [new branch] gh/qqaatw/29/orig -> origin/gh/qqaatw/29/orig 2025-09-07T07:34:58.3363351Z * [new branch] gh/raymo/refresh-script -> origin/gh/raymo/refresh-script 2025-09-07T07:34:58.3366351Z * [new branch] gh/rec/141/base -> origin/gh/rec/141/base 2025-09-07T07:34:58.3366602Z * [new branch] gh/rec/141/head -> origin/gh/rec/141/head 2025-09-07T07:34:58.3366843Z * [new branch] gh/rec/153/base -> origin/gh/rec/153/base 2025-09-07T07:34:58.3367069Z * [new branch] gh/rec/153/head -> origin/gh/rec/153/head 2025-09-07T07:34:58.3367208Z * [new branch] gh/rec/153/orig -> origin/gh/rec/153/orig 2025-09-07T07:34:58.3367350Z * [new branch] gh/rec/154/base -> origin/gh/rec/154/base 2025-09-07T07:34:58.3372916Z * [new branch] gh/rec/154/head -> origin/gh/rec/154/head 2025-09-07T07:34:58.3373081Z * [new branch] gh/rec/154/orig -> origin/gh/rec/154/orig 2025-09-07T07:34:58.3373236Z * [new branch] gh/rec/156/base -> origin/gh/rec/156/base 2025-09-07T07:34:58.3379124Z * [new branch] gh/rec/156/head -> origin/gh/rec/156/head 2025-09-07T07:34:58.3381410Z * [new branch] gh/rec/156/orig -> origin/gh/rec/156/orig 2025-09-07T07:34:58.3381598Z * [new branch] gh/rec/160/base -> origin/gh/rec/160/base 2025-09-07T07:34:58.3381749Z * [new branch] gh/rec/160/head -> origin/gh/rec/160/head 2025-09-07T07:34:58.3381886Z * [new branch] gh/rec/160/orig -> origin/gh/rec/160/orig 2025-09-07T07:34:58.3382015Z * [new branch] gh/rec/162/base -> origin/gh/rec/162/base 2025-09-07T07:34:58.3382146Z * [new branch] gh/rec/162/head -> origin/gh/rec/162/head 2025-09-07T07:34:58.3382284Z * [new branch] gh/rec/162/orig -> origin/gh/rec/162/orig 2025-09-07T07:34:58.3382414Z * [new branch] gh/rec/163/base -> origin/gh/rec/163/base 2025-09-07T07:34:58.3382551Z * [new branch] gh/rec/163/head -> origin/gh/rec/163/head 2025-09-07T07:34:58.3382679Z * [new branch] gh/rec/163/orig -> origin/gh/rec/163/orig 2025-09-07T07:34:58.3382816Z * [new branch] gh/rec/164/base -> origin/gh/rec/164/base 2025-09-07T07:34:58.3383097Z * [new branch] gh/rec/164/head -> origin/gh/rec/164/head 2025-09-07T07:34:58.3383227Z * [new branch] gh/rec/164/orig -> origin/gh/rec/164/orig 2025-09-07T07:34:58.3383364Z * [new branch] gh/rec/165/base -> origin/gh/rec/165/base 2025-09-07T07:34:58.3383496Z * [new branch] gh/rec/165/head -> origin/gh/rec/165/head 2025-09-07T07:34:58.3383632Z * [new branch] gh/rec/165/orig -> origin/gh/rec/165/orig 2025-09-07T07:34:58.3383758Z * [new branch] gh/rec/166/base -> origin/gh/rec/166/base 2025-09-07T07:34:58.3388736Z * [new branch] gh/rec/166/head -> origin/gh/rec/166/head 2025-09-07T07:34:58.3390197Z * [new branch] gh/rec/166/orig -> origin/gh/rec/166/orig 2025-09-07T07:34:58.3390409Z * [new branch] gh/robert-hardwick/1/base -> origin/gh/robert-hardwick/1/base 2025-09-07T07:34:58.3390601Z * [new branch] gh/robert-hardwick/1/head -> origin/gh/robert-hardwick/1/head 2025-09-07T07:34:58.3390759Z * [new branch] gh/robert-hardwick/1/orig -> origin/gh/robert-hardwick/1/orig 2025-09-07T07:34:58.3390927Z * [new branch] gh/robert-hardwick/2/base -> origin/gh/robert-hardwick/2/base 2025-09-07T07:34:58.3391216Z * [new branch] gh/robert-hardwick/2/head -> 
origin/gh/robert-hardwick/2/head 2025-09-07T07:34:58.3391389Z * [new branch] gh/robert-hardwick/2/orig -> origin/gh/robert-hardwick/2/orig 2025-09-07T07:34:58.3391549Z * [new branch] gh/robert-hardwick/3/base -> origin/gh/robert-hardwick/3/base 2025-09-07T07:34:58.3391709Z * [new branch] gh/robert-hardwick/3/head -> origin/gh/robert-hardwick/3/head 2025-09-07T07:34:58.3391875Z * [new branch] gh/robert-hardwick/3/orig -> origin/gh/robert-hardwick/3/orig 2025-09-07T07:34:58.3392591Z * [new branch] gh/robert-hardwick/4/base -> origin/gh/robert-hardwick/4/base 2025-09-07T07:34:58.3392797Z * [new branch] gh/robert-hardwick/4/head -> origin/gh/robert-hardwick/4/head 2025-09-07T07:34:58.3392978Z * [new branch] gh/robert-hardwick/4/orig -> origin/gh/robert-hardwick/4/orig 2025-09-07T07:34:58.3394084Z * [new branch] gh/rtimpe/1/base -> origin/gh/rtimpe/1/base 2025-09-07T07:34:58.3394428Z * [new branch] gh/rtimpe/1/head -> origin/gh/rtimpe/1/head 2025-09-07T07:34:58.3396969Z * [new branch] gh/rtimpe/10/base -> origin/gh/rtimpe/10/base 2025-09-07T07:34:58.3397345Z * [new branch] gh/rtimpe/10/head -> origin/gh/rtimpe/10/head 2025-09-07T07:34:58.3397486Z * [new branch] gh/rtimpe/10/orig -> origin/gh/rtimpe/10/orig 2025-09-07T07:34:58.3397749Z * [new branch] gh/rtimpe/11/base -> origin/gh/rtimpe/11/base 2025-09-07T07:34:58.3398765Z * [new branch] gh/rtimpe/11/head -> origin/gh/rtimpe/11/head 2025-09-07T07:34:58.3399229Z * [new branch] gh/rtimpe/11/orig -> origin/gh/rtimpe/11/orig 2025-09-07T07:34:58.3400405Z * [new branch] gh/rtimpe/12/base -> origin/gh/rtimpe/12/base 2025-09-07T07:34:58.3400655Z * [new branch] gh/rtimpe/12/head -> origin/gh/rtimpe/12/head 2025-09-07T07:34:58.3401677Z * [new branch] gh/rtimpe/12/orig -> origin/gh/rtimpe/12/orig 2025-09-07T07:34:58.3402916Z * [new branch] gh/rtimpe/13/base -> origin/gh/rtimpe/13/base 2025-09-07T07:34:58.3403303Z * [new branch] gh/rtimpe/13/head -> origin/gh/rtimpe/13/head 2025-09-07T07:34:58.3404256Z * [new branch] gh/rtimpe/13/orig -> origin/gh/rtimpe/13/orig 2025-09-07T07:34:58.3405194Z * [new branch] gh/rtimpe/14/base -> origin/gh/rtimpe/14/base 2025-09-07T07:34:58.3405563Z * [new branch] gh/rtimpe/14/head -> origin/gh/rtimpe/14/head 2025-09-07T07:34:58.3406616Z * [new branch] gh/rtimpe/14/orig -> origin/gh/rtimpe/14/orig 2025-09-07T07:34:58.3407822Z * [new branch] gh/rtimpe/15/base -> origin/gh/rtimpe/15/base 2025-09-07T07:34:58.3408460Z * [new branch] gh/rtimpe/15/head -> origin/gh/rtimpe/15/head 2025-09-07T07:34:58.3409035Z * [new branch] gh/rtimpe/15/orig -> origin/gh/rtimpe/15/orig 2025-09-07T07:34:58.3414452Z * [new branch] gh/rtimpe/2/base -> origin/gh/rtimpe/2/base 2025-09-07T07:34:58.3414634Z * [new branch] gh/rtimpe/2/head -> origin/gh/rtimpe/2/head 2025-09-07T07:34:58.3414781Z * [new branch] gh/rtimpe/3/base -> origin/gh/rtimpe/3/base 2025-09-07T07:34:58.3414924Z * [new branch] gh/rtimpe/3/head -> origin/gh/rtimpe/3/head 2025-09-07T07:34:58.3415073Z * [new branch] gh/rtimpe/4/base -> origin/gh/rtimpe/4/base 2025-09-07T07:34:58.3415222Z * [new branch] gh/rtimpe/4/head -> origin/gh/rtimpe/4/head 2025-09-07T07:34:58.3415365Z * [new branch] gh/rtimpe/9/base -> origin/gh/rtimpe/9/base 2025-09-07T07:34:58.3415553Z * [new branch] gh/rtimpe/9/head -> origin/gh/rtimpe/9/head 2025-09-07T07:34:58.3416706Z * [new branch] gh/rtimpe/9/orig -> origin/gh/rtimpe/9/orig 2025-09-07T07:34:58.3418136Z * [new branch] gh/ruisizhang123/1/base -> origin/gh/ruisizhang123/1/base 2025-09-07T07:34:58.3418318Z * [new branch] gh/ruisizhang123/1/head -> 
origin/gh/ruisizhang123/1/head 2025-09-07T07:34:58.3418793Z * [new branch] gh/ruisizhang123/1/orig -> origin/gh/ruisizhang123/1/orig 2025-09-07T07:34:58.3419821Z * [new branch] gh/ruisizhang123/4/base -> origin/gh/ruisizhang123/4/base 2025-09-07T07:34:58.3420155Z * [new branch] gh/ruisizhang123/4/head -> origin/gh/ruisizhang123/4/head 2025-09-07T07:34:58.3423264Z * [new branch] gh/ruisizhang123/4/orig -> origin/gh/ruisizhang123/4/orig 2025-09-07T07:34:58.3423450Z * [new branch] gh/ruisizhang123/5/base -> origin/gh/ruisizhang123/5/base 2025-09-07T07:34:58.3423619Z * [new branch] gh/ruisizhang123/5/head -> origin/gh/ruisizhang123/5/head 2025-09-07T07:34:58.3423769Z * [new branch] gh/ruisizhang123/5/orig -> origin/gh/ruisizhang123/5/orig 2025-09-07T07:34:58.3424271Z * [new branch] gh/ruisizhang123/6/base -> origin/gh/ruisizhang123/6/base 2025-09-07T07:34:58.3424723Z * [new branch] gh/ruisizhang123/6/head -> origin/gh/ruisizhang123/6/head 2025-09-07T07:34:58.3425613Z * [new branch] gh/ruisizhang123/6/orig -> origin/gh/ruisizhang123/6/orig 2025-09-07T07:34:58.3426488Z * [new branch] gh/ruisizhang123/7/base -> origin/gh/ruisizhang123/7/base 2025-09-07T07:34:58.3426962Z * [new branch] gh/ruisizhang123/7/head -> origin/gh/ruisizhang123/7/head 2025-09-07T07:34:58.3427877Z * [new branch] gh/ruisizhang123/7/orig -> origin/gh/ruisizhang123/7/orig 2025-09-07T07:34:58.3428634Z * [new branch] gh/ruisizhang123/8/base -> origin/gh/ruisizhang123/8/base 2025-09-07T07:34:58.3429123Z * [new branch] gh/ruisizhang123/8/head -> origin/gh/ruisizhang123/8/head 2025-09-07T07:34:58.3430097Z * [new branch] gh/ruisizhang123/8/orig -> origin/gh/ruisizhang123/8/orig 2025-09-07T07:34:58.3430854Z * [new branch] gh/ruisizhang123/9/base -> origin/gh/ruisizhang123/9/base 2025-09-07T07:34:58.3431433Z * [new branch] gh/ruisizhang123/9/head -> origin/gh/ruisizhang123/9/head 2025-09-07T07:34:58.3432374Z * [new branch] gh/ruisizhang123/9/orig -> origin/gh/ruisizhang123/9/orig 2025-09-07T07:34:58.3433547Z * [new branch] gh/sarckk/2/base -> origin/gh/sarckk/2/base 2025-09-07T07:34:58.3433838Z * [new branch] gh/sarckk/2/head -> origin/gh/sarckk/2/head 2025-09-07T07:34:58.3435091Z * [new branch] gh/sarckk/2/orig -> origin/gh/sarckk/2/orig 2025-09-07T07:34:58.3436127Z * [new branch] gh/seemethere/35/base -> origin/gh/seemethere/35/base 2025-09-07T07:34:58.3436408Z * [new branch] gh/seemethere/35/head -> origin/gh/seemethere/35/head 2025-09-07T07:34:58.3437532Z * [new branch] gh/seemethere/35/orig -> origin/gh/seemethere/35/orig 2025-09-07T07:34:58.3438368Z * [new branch] gh/seemethere/37/base -> origin/gh/seemethere/37/base 2025-09-07T07:34:58.3438831Z * [new branch] gh/seemethere/37/head -> origin/gh/seemethere/37/head 2025-09-07T07:34:58.3440017Z * [new branch] gh/seemethere/37/orig -> origin/gh/seemethere/37/orig 2025-09-07T07:34:58.3440968Z * [new branch] gh/seemethere/43/base -> origin/gh/seemethere/43/base 2025-09-07T07:34:58.3441339Z * [new branch] gh/seemethere/43/head -> origin/gh/seemethere/43/head 2025-09-07T07:34:58.3442386Z * [new branch] gh/seemethere/43/orig -> origin/gh/seemethere/43/orig 2025-09-07T07:34:58.3442989Z * [new branch] gh/seemethere/44/base -> origin/gh/seemethere/44/base 2025-09-07T07:34:58.3443914Z * [new branch] gh/seemethere/44/head -> origin/gh/seemethere/44/head 2025-09-07T07:34:58.3444907Z * [new branch] gh/seemethere/44/orig -> origin/gh/seemethere/44/orig 2025-09-07T07:34:58.3445622Z * [new branch] gh/seemethere/48/base -> origin/gh/seemethere/48/base 2025-09-07T07:34:58.3451936Z * [new branch] 
gh/seemethere/48/head -> origin/gh/seemethere/48/head 2025-09-07T07:34:58.3452380Z * [new branch] gh/seemethere/48/orig -> origin/gh/seemethere/48/orig 2025-09-07T07:34:58.3453579Z * [new branch] gh/seemethere/49/base -> origin/gh/seemethere/49/base 2025-09-07T07:34:58.3453931Z * [new branch] gh/seemethere/49/head -> origin/gh/seemethere/49/head 2025-09-07T07:34:58.3457444Z * [new branch] gh/seemethere/49/orig -> origin/gh/seemethere/49/orig 2025-09-07T07:34:58.3457649Z * [new branch] gh/seemethere/52/base -> origin/gh/seemethere/52/base 2025-09-07T07:34:58.3457814Z * [new branch] gh/seemethere/52/head -> origin/gh/seemethere/52/head 2025-09-07T07:34:58.3457968Z * [new branch] gh/seemethere/52/orig -> origin/gh/seemethere/52/orig 2025-09-07T07:34:58.3461233Z * [new branch] gh/seemethere/53/base -> origin/gh/seemethere/53/base 2025-09-07T07:34:58.3461513Z * [new branch] gh/seemethere/53/head -> origin/gh/seemethere/53/head 2025-09-07T07:34:58.3467622Z * [new branch] gh/seemethere/53/orig -> origin/gh/seemethere/53/orig 2025-09-07T07:34:58.3470020Z * [new branch] gh/seemethere/54/base -> origin/gh/seemethere/54/base 2025-09-07T07:34:58.3470232Z * [new branch] gh/seemethere/54/head -> origin/gh/seemethere/54/head 2025-09-07T07:34:58.3470487Z * [new branch] gh/seemethere/54/orig -> origin/gh/seemethere/54/orig 2025-09-07T07:34:58.3470740Z * [new branch] gh/seemethere/55/base -> origin/gh/seemethere/55/base 2025-09-07T07:34:58.3470898Z * [new branch] gh/seemethere/55/head -> origin/gh/seemethere/55/head 2025-09-07T07:34:58.3471122Z * [new branch] gh/seemethere/55/orig -> origin/gh/seemethere/55/orig 2025-09-07T07:34:58.3476316Z * [new branch] gh/seemethere/56/base -> origin/gh/seemethere/56/base 2025-09-07T07:34:58.3479406Z * [new branch] gh/seemethere/56/head -> origin/gh/seemethere/56/head 2025-09-07T07:34:58.3481586Z * [new branch] gh/seemethere/56/orig -> origin/gh/seemethere/56/orig 2025-09-07T07:34:58.3481789Z * [new branch] gh/seemethere/57/base -> origin/gh/seemethere/57/base 2025-09-07T07:34:58.3481950Z * [new branch] gh/seemethere/57/head -> origin/gh/seemethere/57/head 2025-09-07T07:34:58.3482118Z * [new branch] gh/seemethere/57/orig -> origin/gh/seemethere/57/orig 2025-09-07T07:34:58.3482271Z * [new branch] gh/seemethere/58/base -> origin/gh/seemethere/58/base 2025-09-07T07:34:58.3482432Z * [new branch] gh/seemethere/58/head -> origin/gh/seemethere/58/head 2025-09-07T07:34:58.3482588Z * [new branch] gh/seemethere/58/orig -> origin/gh/seemethere/58/orig 2025-09-07T07:34:58.3482744Z * [new branch] gh/seemethere/59/base -> origin/gh/seemethere/59/base 2025-09-07T07:34:58.3482940Z * [new branch] gh/seemethere/59/head -> origin/gh/seemethere/59/head 2025-09-07T07:34:58.3483101Z * [new branch] gh/seemethere/59/orig -> origin/gh/seemethere/59/orig 2025-09-07T07:34:58.3483256Z * [new branch] gh/seemethere/60/base -> origin/gh/seemethere/60/base 2025-09-07T07:34:58.3483406Z * [new branch] gh/seemethere/60/head -> origin/gh/seemethere/60/head 2025-09-07T07:34:58.3483779Z * [new branch] gh/seemethere/60/orig -> origin/gh/seemethere/60/orig 2025-09-07T07:34:58.3483941Z * [new branch] gh/seemethere/61/base -> origin/gh/seemethere/61/base 2025-09-07T07:34:58.3484099Z * [new branch] gh/seemethere/61/head -> origin/gh/seemethere/61/head 2025-09-07T07:34:58.3484249Z * [new branch] gh/seemethere/61/orig -> origin/gh/seemethere/61/orig 2025-09-07T07:34:58.3484400Z * [new branch] gh/seemethere/62/base -> origin/gh/seemethere/62/base 2025-09-07T07:34:58.3484567Z * [new branch] gh/seemethere/62/head -> 
origin/gh/seemethere/62/head 2025-09-07T07:34:58.3484720Z * [new branch] gh/seemethere/62/orig -> origin/gh/seemethere/62/orig 2025-09-07T07:34:58.3484874Z * [new branch] gh/seemethere/63/base -> origin/gh/seemethere/63/base 2025-09-07T07:34:58.3485032Z * [new branch] gh/seemethere/63/head -> origin/gh/seemethere/63/head 2025-09-07T07:34:58.3485204Z * [new branch] gh/seemethere/63/orig -> origin/gh/seemethere/63/orig 2025-09-07T07:34:58.3485388Z * [new branch] gh/shunting314/145/base -> origin/gh/shunting314/145/base 2025-09-07T07:34:58.3485554Z * [new branch] gh/shunting314/145/head -> origin/gh/shunting314/145/head 2025-09-07T07:34:58.3485722Z * [new branch] gh/shunting314/145/orig -> origin/gh/shunting314/145/orig 2025-09-07T07:34:58.3486417Z * [new branch] gh/shunting314/176/base -> origin/gh/shunting314/176/base 2025-09-07T07:34:58.3487636Z * [new branch] gh/shunting314/176/head -> origin/gh/shunting314/176/head 2025-09-07T07:34:58.3487959Z * [new branch] gh/shunting314/176/orig -> origin/gh/shunting314/176/orig 2025-09-07T07:34:58.3492919Z * [new branch] gh/shunting314/211/base -> origin/gh/shunting314/211/base 2025-09-07T07:34:58.3493134Z * [new branch] gh/shunting314/211/head -> origin/gh/shunting314/211/head 2025-09-07T07:34:58.3493309Z * [new branch] gh/shunting314/211/orig -> origin/gh/shunting314/211/orig 2025-09-07T07:34:58.3493472Z * [new branch] gh/shunting314/212/base -> origin/gh/shunting314/212/base 2025-09-07T07:34:58.3493640Z * [new branch] gh/shunting314/212/head -> origin/gh/shunting314/212/head 2025-09-07T07:34:58.3493798Z * [new branch] gh/shunting314/212/orig -> origin/gh/shunting314/212/orig 2025-09-07T07:34:58.3499584Z * [new branch] gh/shunting314/213/base -> origin/gh/shunting314/213/base 2025-09-07T07:34:58.3504495Z * [new branch] gh/shunting314/213/head -> origin/gh/shunting314/213/head 2025-09-07T07:34:58.3504836Z * [new branch] gh/shunting314/213/orig -> origin/gh/shunting314/213/orig 2025-09-07T07:34:58.3505038Z * [new branch] gh/shunting314/214/base -> origin/gh/shunting314/214/base 2025-09-07T07:34:58.3505253Z * [new branch] gh/shunting314/214/head -> origin/gh/shunting314/214/head 2025-09-07T07:34:58.3505413Z * [new branch] gh/shunting314/214/orig -> origin/gh/shunting314/214/orig 2025-09-07T07:34:58.3505668Z * [new branch] gh/shunting314/215/base -> origin/gh/shunting314/215/base 2025-09-07T07:34:58.3506380Z * [new branch] gh/shunting314/215/head -> origin/gh/shunting314/215/head 2025-09-07T07:34:58.3506620Z * [new branch] gh/shunting314/215/orig -> origin/gh/shunting314/215/orig 2025-09-07T07:34:58.3506823Z * [new branch] gh/shunting314/216/base -> origin/gh/shunting314/216/base 2025-09-07T07:34:58.3507001Z * [new branch] gh/shunting314/216/head -> origin/gh/shunting314/216/head 2025-09-07T07:34:58.3507166Z * [new branch] gh/shunting314/216/orig -> origin/gh/shunting314/216/orig 2025-09-07T07:34:58.3507522Z * [new branch] gh/shunting314/217/base -> origin/gh/shunting314/217/base 2025-09-07T07:34:58.3507689Z * [new branch] gh/shunting314/217/head -> origin/gh/shunting314/217/head 2025-09-07T07:34:58.3507860Z * [new branch] gh/shunting314/217/orig -> origin/gh/shunting314/217/orig 2025-09-07T07:34:58.3508043Z * [new branch] gh/shunting314/218/base -> origin/gh/shunting314/218/base 2025-09-07T07:34:58.3508378Z * [new branch] gh/shunting314/218/head -> origin/gh/shunting314/218/head 2025-09-07T07:34:58.3508606Z * [new branch] gh/shunting314/218/orig -> origin/gh/shunting314/218/orig 2025-09-07T07:34:58.3508886Z * [new branch] gh/shunting314/219/base -> 
origin/gh/shunting314/219/base 2025-09-07T07:34:58.3509156Z * [new branch] gh/shunting314/219/head -> origin/gh/shunting314/219/head 2025-09-07T07:34:58.3509424Z * [new branch] gh/shunting314/219/orig -> origin/gh/shunting314/219/orig 2025-09-07T07:34:58.3511163Z * [new branch] gh/shunting314/220/base -> origin/gh/shunting314/220/base 2025-09-07T07:34:58.3518080Z * [new branch] gh/shunting314/220/head -> origin/gh/shunting314/220/head 2025-09-07T07:34:58.3518421Z * [new branch] gh/shunting314/220/orig -> origin/gh/shunting314/220/orig 2025-09-07T07:34:58.3518668Z * [new branch] gh/shunting314/221/base -> origin/gh/shunting314/221/base 2025-09-07T07:34:58.3518901Z * [new branch] gh/shunting314/221/head -> origin/gh/shunting314/221/head 2025-09-07T07:34:58.3519108Z * [new branch] gh/shunting314/221/orig -> origin/gh/shunting314/221/orig 2025-09-07T07:34:58.3519253Z * [new branch] gh/shunting314/222/base -> origin/gh/shunting314/222/base 2025-09-07T07:34:58.3519534Z * [new branch] gh/shunting314/222/head -> origin/gh/shunting314/222/head 2025-09-07T07:34:58.3520079Z * [new branch] gh/shunting314/222/orig -> origin/gh/shunting314/222/orig 2025-09-07T07:34:58.3520270Z * [new branch] gh/shunting314/223/base -> origin/gh/shunting314/223/base 2025-09-07T07:34:58.3520428Z * [new branch] gh/shunting314/223/head -> origin/gh/shunting314/223/head 2025-09-07T07:34:58.3520574Z * [new branch] gh/shunting314/223/orig -> origin/gh/shunting314/223/orig 2025-09-07T07:34:58.3520737Z * [new branch] gh/silverguo/1/base -> origin/gh/silverguo/1/base 2025-09-07T07:34:58.3521046Z * [new branch] gh/silverguo/1/head -> origin/gh/silverguo/1/head 2025-09-07T07:34:58.3521185Z * [new branch] gh/silverguo/2/base -> origin/gh/silverguo/2/base 2025-09-07T07:34:58.3521332Z * [new branch] gh/silverguo/2/head -> origin/gh/silverguo/2/head 2025-09-07T07:34:58.3521476Z * [new branch] gh/silverguo/3/base -> origin/gh/silverguo/3/base 2025-09-07T07:34:58.3521627Z * [new branch] gh/silverguo/3/head -> origin/gh/silverguo/3/head 2025-09-07T07:34:58.3522709Z * [new branch] gh/silverguo/4/base -> origin/gh/silverguo/4/base 2025-09-07T07:34:58.3523258Z * [new branch] gh/silverguo/4/head -> origin/gh/silverguo/4/head 2025-09-07T07:34:58.3524505Z * [new branch] gh/sinhaanhsul/1/base -> origin/gh/sinhaanhsul/1/base 2025-09-07T07:34:58.3524812Z * [new branch] gh/sinhaanhsul/1/head -> origin/gh/sinhaanhsul/1/head 2025-09-07T07:34:58.3526315Z * [new branch] gh/skarjala/17/base -> origin/gh/skarjala/17/base 2025-09-07T07:34:58.3526780Z * [new branch] gh/skarjala/17/head -> origin/gh/skarjala/17/head 2025-09-07T07:34:58.3530442Z * [new branch] gh/skarjala/17/orig -> origin/gh/skarjala/17/orig 2025-09-07T07:34:58.3530858Z * [new branch] gh/skarjala/18/base -> origin/gh/skarjala/18/base 2025-09-07T07:34:58.3531167Z * [new branch] gh/skarjala/18/head -> origin/gh/skarjala/18/head 2025-09-07T07:34:58.3531320Z * [new branch] gh/skarjala/18/orig -> origin/gh/skarjala/18/orig 2025-09-07T07:34:58.3531487Z * [new branch] gh/skarjala/19/base -> origin/gh/skarjala/19/base 2025-09-07T07:34:58.3531963Z * [new branch] gh/skarjala/19/head -> origin/gh/skarjala/19/head 2025-09-07T07:34:58.3532659Z * [new branch] gh/skarjala/19/orig -> origin/gh/skarjala/19/orig 2025-09-07T07:34:58.3537414Z * [new branch] gh/slayton58/1/base -> origin/gh/slayton58/1/base 2025-09-07T07:34:58.3537615Z * [new branch] gh/slayton58/1/head -> origin/gh/slayton58/1/head 2025-09-07T07:34:58.3537768Z * [new branch] gh/slayton58/1/orig -> origin/gh/slayton58/1/orig 
2025-09-07T07:34:58.3537922Z * [new branch] gh/slayton58/2/base -> origin/gh/slayton58/2/base 2025-09-07T07:34:58.3538085Z * [new branch] gh/slayton58/2/head -> origin/gh/slayton58/2/head 2025-09-07T07:34:58.3538232Z * [new branch] gh/slayton58/2/orig -> origin/gh/slayton58/2/orig 2025-09-07T07:34:58.3538449Z * [new branch] gh/slayton58/3/base -> origin/gh/slayton58/3/base 2025-09-07T07:34:58.3538645Z * [new branch] gh/slayton58/3/head -> origin/gh/slayton58/3/head 2025-09-07T07:34:58.3543031Z * [new branch] gh/slayton58/3/orig -> origin/gh/slayton58/3/orig 2025-09-07T07:34:58.3543224Z * [new branch] gh/slayton58/4/base -> origin/gh/slayton58/4/base 2025-09-07T07:34:58.3543373Z * [new branch] gh/slayton58/4/head -> origin/gh/slayton58/4/head 2025-09-07T07:34:58.3543508Z * [new branch] gh/slayton58/4/orig -> origin/gh/slayton58/4/orig 2025-09-07T07:34:58.3543653Z * [new branch] gh/slayton58/5/base -> origin/gh/slayton58/5/base 2025-09-07T07:34:58.3543951Z * [new branch] gh/slayton58/5/head -> origin/gh/slayton58/5/head 2025-09-07T07:34:58.3544110Z * [new branch] gh/slayton58/5/orig -> origin/gh/slayton58/5/orig 2025-09-07T07:34:58.3552990Z * [new branch] gh/soulitzer/269/base -> origin/gh/soulitzer/269/base 2025-09-07T07:34:58.3553522Z * [new branch] gh/soulitzer/269/head -> origin/gh/soulitzer/269/head 2025-09-07T07:34:58.3553695Z * [new branch] gh/soulitzer/269/orig -> origin/gh/soulitzer/269/orig 2025-09-07T07:34:58.3554040Z * [new branch] gh/soulitzer/276/base -> origin/gh/soulitzer/276/base 2025-09-07T07:34:58.3554325Z * [new branch] gh/soulitzer/276/head -> origin/gh/soulitzer/276/head 2025-09-07T07:34:58.3554509Z * [new branch] gh/soulitzer/276/orig -> origin/gh/soulitzer/276/orig 2025-09-07T07:34:58.3554675Z * [new branch] gh/soulitzer/287/base -> origin/gh/soulitzer/287/base 2025-09-07T07:34:58.3554843Z * [new branch] gh/soulitzer/287/head -> origin/gh/soulitzer/287/head 2025-09-07T07:34:58.3555001Z * [new branch] gh/soulitzer/287/orig -> origin/gh/soulitzer/287/orig 2025-09-07T07:34:58.3555172Z * [new branch] gh/soulitzer/296/base -> origin/gh/soulitzer/296/base 2025-09-07T07:34:58.3555351Z * [new branch] gh/soulitzer/296/head -> origin/gh/soulitzer/296/head 2025-09-07T07:34:58.3555509Z * [new branch] gh/soulitzer/296/orig -> origin/gh/soulitzer/296/orig 2025-09-07T07:34:58.3555663Z * [new branch] gh/soulitzer/299/base -> origin/gh/soulitzer/299/base 2025-09-07T07:34:58.3555851Z * [new branch] gh/soulitzer/299/head -> origin/gh/soulitzer/299/head 2025-09-07T07:34:58.3562322Z * [new branch] gh/soulitzer/299/orig -> origin/gh/soulitzer/299/orig 2025-09-07T07:34:58.3562863Z * [new branch] gh/soulitzer/300/base -> origin/gh/soulitzer/300/base 2025-09-07T07:34:58.3563164Z * [new branch] gh/soulitzer/300/head -> origin/gh/soulitzer/300/head 2025-09-07T07:34:58.3563358Z * [new branch] gh/soulitzer/300/orig -> origin/gh/soulitzer/300/orig 2025-09-07T07:34:58.3563545Z * [new branch] gh/soulitzer/301/base -> origin/gh/soulitzer/301/base 2025-09-07T07:34:58.3564190Z * [new branch] gh/soulitzer/301/head -> origin/gh/soulitzer/301/head 2025-09-07T07:34:58.3564416Z * [new branch] gh/soulitzer/301/orig -> origin/gh/soulitzer/301/orig 2025-09-07T07:34:58.3564567Z * [new branch] gh/soulitzer/313/base -> origin/gh/soulitzer/313/base 2025-09-07T07:34:58.3564719Z * [new branch] gh/soulitzer/313/head -> origin/gh/soulitzer/313/head 2025-09-07T07:34:58.3564865Z * [new branch] gh/soulitzer/313/orig -> origin/gh/soulitzer/313/orig 2025-09-07T07:34:58.3565041Z * [new branch] gh/soulitzer/319/base -> 
origin/gh/soulitzer/319/base 2025-09-07T07:34:58.3565189Z * [new branch] gh/soulitzer/319/head -> origin/gh/soulitzer/319/head 2025-09-07T07:34:58.3565341Z * [new branch] gh/soulitzer/319/orig -> origin/gh/soulitzer/319/orig 2025-09-07T07:34:58.3566425Z * [new branch] gh/soulitzer/320/base -> origin/gh/soulitzer/320/base 2025-09-07T07:34:58.3566631Z * [new branch] gh/soulitzer/320/head -> origin/gh/soulitzer/320/head 2025-09-07T07:34:58.3567873Z * [new branch] gh/soulitzer/320/orig -> origin/gh/soulitzer/320/orig 2025-09-07T07:34:58.3568987Z * [new branch] gh/soulitzer/336/base -> origin/gh/soulitzer/336/base 2025-09-07T07:34:58.3569375Z * [new branch] gh/soulitzer/336/head -> origin/gh/soulitzer/336/head 2025-09-07T07:34:58.3569960Z * [new branch] gh/soulitzer/336/orig -> origin/gh/soulitzer/336/orig 2025-09-07T07:34:58.3571139Z * [new branch] gh/soulitzer/347/base -> origin/gh/soulitzer/347/base 2025-09-07T07:34:58.3571446Z * [new branch] gh/soulitzer/347/head -> origin/gh/soulitzer/347/head 2025-09-07T07:34:58.3573402Z * [new branch] gh/soulitzer/347/orig -> origin/gh/soulitzer/347/orig 2025-09-07T07:34:58.3573598Z * [new branch] gh/soulitzer/349/base -> origin/gh/soulitzer/349/base 2025-09-07T07:34:58.3574004Z * [new branch] gh/soulitzer/349/head -> origin/gh/soulitzer/349/head 2025-09-07T07:34:58.3574651Z * [new branch] gh/soulitzer/349/orig -> origin/gh/soulitzer/349/orig 2025-09-07T07:34:58.3575682Z * [new branch] gh/soulitzer/350/base -> origin/gh/soulitzer/350/base 2025-09-07T07:34:58.3575905Z * [new branch] gh/soulitzer/350/head -> origin/gh/soulitzer/350/head 2025-09-07T07:34:58.3576922Z * [new branch] gh/soulitzer/350/orig -> origin/gh/soulitzer/350/orig 2025-09-07T07:34:58.3581331Z * [new branch] gh/soulitzer/351/base -> origin/gh/soulitzer/351/base 2025-09-07T07:34:58.3586195Z * [new branch] gh/soulitzer/351/head -> origin/gh/soulitzer/351/head 2025-09-07T07:34:58.3591637Z * [new branch] gh/soulitzer/351/orig -> origin/gh/soulitzer/351/orig 2025-09-07T07:34:58.3593622Z * [new branch] gh/soulitzer/353/base -> origin/gh/soulitzer/353/base 2025-09-07T07:34:58.3593780Z * [new branch] gh/soulitzer/353/head -> origin/gh/soulitzer/353/head 2025-09-07T07:34:58.3593923Z * [new branch] gh/soulitzer/353/orig -> origin/gh/soulitzer/353/orig 2025-09-07T07:34:58.3594129Z * [new branch] gh/soulitzer/358/base -> origin/gh/soulitzer/358/base 2025-09-07T07:34:58.3600133Z * [new branch] gh/soulitzer/358/head -> origin/gh/soulitzer/358/head 2025-09-07T07:34:58.3602009Z * [new branch] gh/soulitzer/358/orig -> origin/gh/soulitzer/358/orig 2025-09-07T07:34:58.3602185Z * [new branch] gh/soulitzer/359/base -> origin/gh/soulitzer/359/base 2025-09-07T07:34:58.3602333Z * [new branch] gh/soulitzer/359/head -> origin/gh/soulitzer/359/head 2025-09-07T07:34:58.3602473Z * [new branch] gh/soulitzer/359/orig -> origin/gh/soulitzer/359/orig 2025-09-07T07:34:58.3602624Z * [new branch] gh/soulitzer/362/base -> origin/gh/soulitzer/362/base 2025-09-07T07:34:58.3602781Z * [new branch] gh/soulitzer/362/head -> origin/gh/soulitzer/362/head 2025-09-07T07:34:58.3602919Z * [new branch] gh/soulitzer/362/orig -> origin/gh/soulitzer/362/orig 2025-09-07T07:34:58.3603063Z * [new branch] gh/soulitzer/372/base -> origin/gh/soulitzer/372/base 2025-09-07T07:34:58.3603203Z * [new branch] gh/soulitzer/372/head -> origin/gh/soulitzer/372/head 2025-09-07T07:34:58.3603346Z * [new branch] gh/soulitzer/372/orig -> origin/gh/soulitzer/372/orig 2025-09-07T07:34:58.3603481Z * [new branch] gh/soulitzer/373/base -> 
origin/gh/soulitzer/373/base 2025-09-07T07:34:58.3603628Z * [new branch] gh/soulitzer/373/head -> origin/gh/soulitzer/373/head 2025-09-07T07:34:58.3603776Z * [new branch] gh/soulitzer/373/orig -> origin/gh/soulitzer/373/orig 2025-09-07T07:34:58.3603923Z * [new branch] gh/soulitzer/374/base -> origin/gh/soulitzer/374/base 2025-09-07T07:34:58.3604076Z * [new branch] gh/soulitzer/374/head -> origin/gh/soulitzer/374/head 2025-09-07T07:34:58.3604222Z * [new branch] gh/soulitzer/374/orig -> origin/gh/soulitzer/374/orig 2025-09-07T07:34:58.3604372Z * [new branch] gh/soulitzer/375/base -> origin/gh/soulitzer/375/base 2025-09-07T07:34:58.3604520Z * [new branch] gh/soulitzer/375/head -> origin/gh/soulitzer/375/head 2025-09-07T07:34:58.3604664Z * [new branch] gh/soulitzer/375/orig -> origin/gh/soulitzer/375/orig 2025-09-07T07:34:58.3604815Z * [new branch] gh/soulitzer/376/base -> origin/gh/soulitzer/376/base 2025-09-07T07:34:58.3604958Z * [new branch] gh/soulitzer/376/head -> origin/gh/soulitzer/376/head 2025-09-07T07:34:58.3605111Z * [new branch] gh/soulitzer/376/orig -> origin/gh/soulitzer/376/orig 2025-09-07T07:34:58.3605302Z * [new branch] gh/soulitzer/377/base -> origin/gh/soulitzer/377/base 2025-09-07T07:34:58.3605452Z * [new branch] gh/soulitzer/377/head -> origin/gh/soulitzer/377/head 2025-09-07T07:34:58.3605594Z * [new branch] gh/soulitzer/377/orig -> origin/gh/soulitzer/377/orig 2025-09-07T07:34:58.3605742Z * [new branch] gh/soulitzer/378/base -> origin/gh/soulitzer/378/base 2025-09-07T07:34:58.3605894Z * [new branch] gh/soulitzer/378/head -> origin/gh/soulitzer/378/head 2025-09-07T07:34:58.3606036Z * [new branch] gh/soulitzer/378/orig -> origin/gh/soulitzer/378/orig 2025-09-07T07:34:58.3606186Z * [new branch] gh/soulitzer/379/base -> origin/gh/soulitzer/379/base 2025-09-07T07:34:58.3606979Z * [new branch] gh/soulitzer/379/head -> origin/gh/soulitzer/379/head 2025-09-07T07:34:58.3607219Z * [new branch] gh/soulitzer/379/orig -> origin/gh/soulitzer/379/orig 2025-09-07T07:34:58.3612660Z * [new branch] gh/swolchok/728/next -> origin/gh/swolchok/728/next 2025-09-07T07:34:58.3616832Z * [new branch] gh/swolchok/767/base -> origin/gh/swolchok/767/base 2025-09-07T07:34:58.3618648Z * [new branch] gh/swolchok/767/head -> origin/gh/swolchok/767/head 2025-09-07T07:34:58.3619145Z * [new branch] gh/swolchok/767/orig -> origin/gh/swolchok/767/orig 2025-09-07T07:34:58.3619336Z * [new branch] gh/swolchok/768/base -> origin/gh/swolchok/768/base 2025-09-07T07:34:58.3619489Z * [new branch] gh/swolchok/768/head -> origin/gh/swolchok/768/head 2025-09-07T07:34:58.3619631Z * [new branch] gh/swolchok/768/orig -> origin/gh/swolchok/768/orig 2025-09-07T07:34:58.3619792Z * [new branch] gh/swolchok/769/base -> origin/gh/swolchok/769/base 2025-09-07T07:34:58.3619936Z * [new branch] gh/swolchok/769/head -> origin/gh/swolchok/769/head 2025-09-07T07:34:58.3620092Z * [new branch] gh/swolchok/769/orig -> origin/gh/swolchok/769/orig 2025-09-07T07:34:58.3620239Z * [new branch] gh/swolchok/771/base -> origin/gh/swolchok/771/base 2025-09-07T07:34:58.3620397Z * [new branch] gh/swolchok/771/head -> origin/gh/swolchok/771/head 2025-09-07T07:34:58.3620548Z * [new branch] gh/swolchok/771/orig -> origin/gh/swolchok/771/orig 2025-09-07T07:34:58.3623548Z * [new branch] gh/swolchok/772/base -> origin/gh/swolchok/772/base 2025-09-07T07:34:58.3623824Z * [new branch] gh/swolchok/772/head -> origin/gh/swolchok/772/head 2025-09-07T07:34:58.3623976Z * [new branch] gh/swolchok/772/orig -> origin/gh/swolchok/772/orig 2025-09-07T07:34:58.3624118Z 
* [new branch] gh/swolchok/773/base -> origin/gh/swolchok/773/base 2025-09-07T07:34:58.3624272Z * [new branch] gh/swolchok/773/head -> origin/gh/swolchok/773/head 2025-09-07T07:34:58.3624407Z * [new branch] gh/swolchok/773/orig -> origin/gh/swolchok/773/orig 2025-09-07T07:34:58.3625330Z * [new branch] gh/swolchok/786/base -> origin/gh/swolchok/786/base 2025-09-07T07:34:58.3625787Z * [new branch] gh/swolchok/786/head -> origin/gh/swolchok/786/head 2025-09-07T07:34:58.3626713Z * [new branch] gh/swolchok/786/orig -> origin/gh/swolchok/786/orig 2025-09-07T07:34:58.3627289Z * [new branch] gh/swolchok/787/base -> origin/gh/swolchok/787/base 2025-09-07T07:34:58.3628148Z * [new branch] gh/swolchok/787/head -> origin/gh/swolchok/787/head 2025-09-07T07:34:58.3628466Z * [new branch] gh/swolchok/787/orig -> origin/gh/swolchok/787/orig 2025-09-07T07:34:58.3629698Z * [new branch] gh/swolchok/788/base -> origin/gh/swolchok/788/base 2025-09-07T07:34:58.3630163Z * [new branch] gh/swolchok/788/head -> origin/gh/swolchok/788/head 2025-09-07T07:34:58.3631100Z * [new branch] gh/swolchok/788/orig -> origin/gh/swolchok/788/orig 2025-09-07T07:34:58.3632119Z * [new branch] gh/swolchok/789/base -> origin/gh/swolchok/789/base 2025-09-07T07:34:58.3632366Z * [new branch] gh/swolchok/789/head -> origin/gh/swolchok/789/head 2025-09-07T07:34:58.3633237Z * [new branch] gh/swolchok/789/orig -> origin/gh/swolchok/789/orig 2025-09-07T07:34:58.3634083Z * [new branch] gh/swolchok/790/base -> origin/gh/swolchok/790/base 2025-09-07T07:34:58.3634469Z * [new branch] gh/swolchok/790/head -> origin/gh/swolchok/790/head 2025-09-07T07:34:58.3635412Z * [new branch] gh/swolchok/790/orig -> origin/gh/swolchok/790/orig 2025-09-07T07:34:58.3636423Z * [new branch] gh/swolchok/791/base -> origin/gh/swolchok/791/base 2025-09-07T07:34:58.3636686Z * [new branch] gh/swolchok/791/head -> origin/gh/swolchok/791/head 2025-09-07T07:34:58.3638080Z * [new branch] gh/swolchok/791/orig -> origin/gh/swolchok/791/orig 2025-09-07T07:34:58.3639032Z * [new branch] gh/swolchok/792/base -> origin/gh/swolchok/792/base 2025-09-07T07:34:58.3639359Z * [new branch] gh/swolchok/792/head -> origin/gh/swolchok/792/head 2025-09-07T07:34:58.3640257Z * [new branch] gh/swolchok/792/orig -> origin/gh/swolchok/792/orig 2025-09-07T07:34:58.3641234Z * [new branch] gh/swolchok/793/base -> origin/gh/swolchok/793/base 2025-09-07T07:34:58.3641612Z * [new branch] gh/swolchok/793/head -> origin/gh/swolchok/793/head 2025-09-07T07:34:58.3642952Z * [new branch] gh/swolchok/793/orig -> origin/gh/swolchok/793/orig 2025-09-07T07:34:58.3643918Z * [new branch] gh/swolchok/794/base -> origin/gh/swolchok/794/base 2025-09-07T07:34:58.3644318Z * [new branch] gh/swolchok/794/head -> origin/gh/swolchok/794/head 2025-09-07T07:34:58.3645454Z * [new branch] gh/swolchok/794/orig -> origin/gh/swolchok/794/orig 2025-09-07T07:34:58.3646880Z * [new branch] gh/swolchok/795/base -> origin/gh/swolchok/795/base 2025-09-07T07:34:58.3647531Z * [new branch] gh/swolchok/795/head -> origin/gh/swolchok/795/head 2025-09-07T07:34:58.3649125Z * [new branch] gh/swolchok/795/orig -> origin/gh/swolchok/795/orig 2025-09-07T07:34:58.3649830Z * [new branch] gh/swolchok/796/base -> origin/gh/swolchok/796/base 2025-09-07T07:34:58.3650003Z * [new branch] gh/swolchok/796/head -> origin/gh/swolchok/796/head 2025-09-07T07:34:58.3650666Z * [new branch] gh/swolchok/796/orig -> origin/gh/swolchok/796/orig 2025-09-07T07:34:58.3654971Z * [new branch] gh/swolchok/797/base -> origin/gh/swolchok/797/base 2025-09-07T07:34:58.3655184Z * 
[new branch] gh/swolchok/797/head -> origin/gh/swolchok/797/head 2025-09-07T07:34:58.3655369Z * [new branch] gh/swolchok/797/orig -> origin/gh/swolchok/797/orig 2025-09-07T07:34:58.3655522Z * [new branch] gh/swolchok/798/base -> origin/gh/swolchok/798/base 2025-09-07T07:34:58.3655698Z * [new branch] gh/swolchok/798/head -> origin/gh/swolchok/798/head 2025-09-07T07:34:58.3655854Z * [new branch] gh/swolchok/798/orig -> origin/gh/swolchok/798/orig 2025-09-07T07:34:58.3657058Z * [new branch] gh/swolchok/799/base -> origin/gh/swolchok/799/base 2025-09-07T07:34:58.3657296Z * [new branch] gh/swolchok/799/head -> origin/gh/swolchok/799/head 2025-09-07T07:34:58.3660544Z * [new branch] gh/swolchok/799/orig -> origin/gh/swolchok/799/orig 2025-09-07T07:34:58.3660982Z * [new branch] gh/swolchok/800/base -> origin/gh/swolchok/800/base 2025-09-07T07:34:58.3661148Z * [new branch] gh/swolchok/800/head -> origin/gh/swolchok/800/head 2025-09-07T07:34:58.3661301Z * [new branch] gh/swolchok/800/orig -> origin/gh/swolchok/800/orig 2025-09-07T07:34:58.3662708Z * [new branch] gh/swolchok/801/base -> origin/gh/swolchok/801/base 2025-09-07T07:34:58.3662991Z * [new branch] gh/swolchok/801/head -> origin/gh/swolchok/801/head 2025-09-07T07:34:58.3666755Z * [new branch] gh/swolchok/801/orig -> origin/gh/swolchok/801/orig 2025-09-07T07:34:58.3671960Z * [new branch] gh/swolchok/802/base -> origin/gh/swolchok/802/base 2025-09-07T07:34:58.3676210Z * [new branch] gh/swolchok/802/head -> origin/gh/swolchok/802/head 2025-09-07T07:34:58.3681024Z * [new branch] gh/swolchok/802/orig -> origin/gh/swolchok/802/orig 2025-09-07T07:34:58.3681244Z * [new branch] gh/swolchok/803/base -> origin/gh/swolchok/803/base 2025-09-07T07:34:58.3681412Z * [new branch] gh/swolchok/803/head -> origin/gh/swolchok/803/head 2025-09-07T07:34:58.3681563Z * [new branch] gh/swolchok/803/orig -> origin/gh/swolchok/803/orig 2025-09-07T07:34:58.3681918Z * [new branch] gh/swolchok/804/base -> origin/gh/swolchok/804/base 2025-09-07T07:34:58.3682072Z * [new branch] gh/swolchok/804/head -> origin/gh/swolchok/804/head 2025-09-07T07:34:58.3682219Z * [new branch] gh/swolchok/804/orig -> origin/gh/swolchok/804/orig 2025-09-07T07:34:58.3682376Z * [new branch] gh/swolchok/805/base -> origin/gh/swolchok/805/base 2025-09-07T07:34:58.3682521Z * [new branch] gh/swolchok/805/head -> origin/gh/swolchok/805/head 2025-09-07T07:34:58.3682672Z * [new branch] gh/swolchok/805/orig -> origin/gh/swolchok/805/orig 2025-09-07T07:34:58.3682824Z * [new branch] gh/swolchok/806/base -> origin/gh/swolchok/806/base 2025-09-07T07:34:58.3682970Z * [new branch] gh/swolchok/806/head -> origin/gh/swolchok/806/head 2025-09-07T07:34:58.3683122Z * [new branch] gh/swolchok/806/orig -> origin/gh/swolchok/806/orig 2025-09-07T07:34:58.3683269Z * [new branch] gh/swolchok/807/base -> origin/gh/swolchok/807/base 2025-09-07T07:34:58.3683419Z * [new branch] gh/swolchok/807/head -> origin/gh/swolchok/807/head 2025-09-07T07:34:58.3683562Z * [new branch] gh/swolchok/807/orig -> origin/gh/swolchok/807/orig 2025-09-07T07:34:58.3683710Z * [new branch] gh/swolchok/808/base -> origin/gh/swolchok/808/base 2025-09-07T07:34:58.3683854Z * [new branch] gh/swolchok/808/head -> origin/gh/swolchok/808/head 2025-09-07T07:34:58.3684005Z * [new branch] gh/swolchok/808/orig -> origin/gh/swolchok/808/orig 2025-09-07T07:34:58.3684154Z * [new branch] gh/swolchok/809/base -> origin/gh/swolchok/809/base 2025-09-07T07:34:58.3684297Z * [new branch] gh/swolchok/809/head -> origin/gh/swolchok/809/head 2025-09-07T07:34:58.3684448Z * 
[new branch] gh/swolchok/809/orig -> origin/gh/swolchok/809/orig 2025-09-07T07:34:58.3685059Z * [new branch] gh/swolchok/810/base -> origin/gh/swolchok/810/base 2025-09-07T07:34:58.3685650Z * [new branch] gh/swolchok/810/head -> origin/gh/swolchok/810/head 2025-09-07T07:34:58.3687140Z * [new branch] gh/swolchok/810/orig -> origin/gh/swolchok/810/orig 2025-09-07T07:34:58.3687529Z * [new branch] gh/swolchok/811/base -> origin/gh/swolchok/811/base 2025-09-07T07:34:58.3691467Z * [new branch] gh/swolchok/811/head -> origin/gh/swolchok/811/head 2025-09-07T07:34:58.3691792Z * [new branch] gh/swolchok/811/orig -> origin/gh/swolchok/811/orig 2025-09-07T07:34:58.3691950Z * [new branch] gh/swolchok/812/base -> origin/gh/swolchok/812/base 2025-09-07T07:34:58.3692100Z * [new branch] gh/swolchok/812/head -> origin/gh/swolchok/812/head 2025-09-07T07:34:58.3692258Z * [new branch] gh/swolchok/812/orig -> origin/gh/swolchok/812/orig 2025-09-07T07:34:58.3695748Z * [new branch] gh/swolchok/813/base -> origin/gh/swolchok/813/base 2025-09-07T07:34:58.3695902Z * [new branch] gh/swolchok/813/head -> origin/gh/swolchok/813/head 2025-09-07T07:34:58.3696051Z * [new branch] gh/swolchok/813/orig -> origin/gh/swolchok/813/orig 2025-09-07T07:34:58.3696200Z * [new branch] gh/swolchok/814/base -> origin/gh/swolchok/814/base 2025-09-07T07:34:58.3696356Z * [new branch] gh/swolchok/814/head -> origin/gh/swolchok/814/head 2025-09-07T07:34:58.3699551Z * [new branch] gh/swolchok/814/orig -> origin/gh/swolchok/814/orig 2025-09-07T07:34:58.3699740Z * [new branch] gh/swolchok/815/base -> origin/gh/swolchok/815/base 2025-09-07T07:34:58.3699887Z * [new branch] gh/swolchok/815/head -> origin/gh/swolchok/815/head 2025-09-07T07:34:58.3700098Z * [new branch] gh/swolchok/815/orig -> origin/gh/swolchok/815/orig 2025-09-07T07:34:58.3700253Z * [new branch] gh/swolchok/816/base -> origin/gh/swolchok/816/base 2025-09-07T07:34:58.3700416Z * [new branch] gh/swolchok/816/head -> origin/gh/swolchok/816/head 2025-09-07T07:34:58.3700858Z * [new branch] gh/swolchok/816/orig -> origin/gh/swolchok/816/orig 2025-09-07T07:34:58.3703577Z * [new branch] gh/swolchok/817/base -> origin/gh/swolchok/817/base 2025-09-07T07:34:58.3703781Z * [new branch] gh/swolchok/817/head -> origin/gh/swolchok/817/head 2025-09-07T07:34:58.3703958Z * [new branch] gh/swolchok/817/orig -> origin/gh/swolchok/817/orig 2025-09-07T07:34:58.3704373Z * [new branch] gh/swolchok/818/base -> origin/gh/swolchok/818/base 2025-09-07T07:34:58.3704889Z * [new branch] gh/swolchok/818/head -> origin/gh/swolchok/818/head 2025-09-07T07:34:58.3706059Z * [new branch] gh/swolchok/818/orig -> origin/gh/swolchok/818/orig 2025-09-07T07:34:58.3706662Z * [new branch] gh/swolchok/819/base -> origin/gh/swolchok/819/base 2025-09-07T07:34:58.3707257Z * [new branch] gh/swolchok/819/head -> origin/gh/swolchok/819/head 2025-09-07T07:34:58.3708263Z * [new branch] gh/swolchok/819/orig -> origin/gh/swolchok/819/orig 2025-09-07T07:34:58.3709242Z * [new branch] gh/swolchok/820/base -> origin/gh/swolchok/820/base 2025-09-07T07:34:58.3709415Z * [new branch] gh/swolchok/820/head -> origin/gh/swolchok/820/head 2025-09-07T07:34:58.3712767Z * [new branch] gh/swolchok/820/orig -> origin/gh/swolchok/820/orig 2025-09-07T07:34:58.3714890Z * [new branch] gh/swolchok/821/base -> origin/gh/swolchok/821/base 2025-09-07T07:34:58.3715075Z * [new branch] gh/swolchok/821/head -> origin/gh/swolchok/821/head 2025-09-07T07:34:58.3715249Z * [new branch] gh/swolchok/821/orig -> origin/gh/swolchok/821/orig 2025-09-07T07:34:58.3715437Z * 
[new branch] gh/swolchok/822/base -> origin/gh/swolchok/822/base 2025-09-07T07:34:58.3715593Z * [new branch] gh/swolchok/822/head -> origin/gh/swolchok/822/head 2025-09-07T07:34:58.3715744Z * [new branch] gh/swolchok/822/orig -> origin/gh/swolchok/822/orig 2025-09-07T07:34:58.3717414Z * [new branch] gh/swolchok/823/base -> origin/gh/swolchok/823/base 2025-09-07T07:34:58.3717964Z * [new branch] gh/swolchok/823/head -> origin/gh/swolchok/823/head 2025-09-07T07:34:58.3718215Z * [new branch] gh/swolchok/823/orig -> origin/gh/swolchok/823/orig 2025-09-07T07:34:58.3718473Z * [new branch] gh/swolchok/824/base -> origin/gh/swolchok/824/base 2025-09-07T07:34:58.3718925Z * [new branch] gh/swolchok/824/head -> origin/gh/swolchok/824/head 2025-09-07T07:34:58.3720850Z * [new branch] gh/swolchok/824/orig -> origin/gh/swolchok/824/orig 2025-09-07T07:34:58.3721037Z * [new branch] gh/swolchok/825/base -> origin/gh/swolchok/825/base 2025-09-07T07:34:58.3721646Z * [new branch] gh/swolchok/825/head -> origin/gh/swolchok/825/head 2025-09-07T07:34:58.3722456Z * [new branch] gh/swolchok/825/orig -> origin/gh/swolchok/825/orig 2025-09-07T07:34:58.3723654Z * [new branch] gh/swolchok/826/base -> origin/gh/swolchok/826/base 2025-09-07T07:34:58.3723930Z * [new branch] gh/swolchok/826/head -> origin/gh/swolchok/826/head 2025-09-07T07:34:58.3724804Z * [new branch] gh/swolchok/826/orig -> origin/gh/swolchok/826/orig 2025-09-07T07:34:58.3725925Z * [new branch] gh/swolchok/827/base -> origin/gh/swolchok/827/base 2025-09-07T07:34:58.3726419Z * [new branch] gh/swolchok/827/head -> origin/gh/swolchok/827/head 2025-09-07T07:34:58.3727178Z * [new branch] gh/swolchok/827/orig -> origin/gh/swolchok/827/orig 2025-09-07T07:34:58.3730699Z * [new branch] gh/swolchok/828/base -> origin/gh/swolchok/828/base 2025-09-07T07:34:58.3730908Z * [new branch] gh/swolchok/828/head -> origin/gh/swolchok/828/head 2025-09-07T07:34:58.3731055Z * [new branch] gh/swolchok/828/orig -> origin/gh/swolchok/828/orig 2025-09-07T07:34:58.3731205Z * [new branch] gh/swolchok/829/base -> origin/gh/swolchok/829/base 2025-09-07T07:34:58.3731381Z * [new branch] gh/swolchok/829/head -> origin/gh/swolchok/829/head 2025-09-07T07:34:58.3731766Z * [new branch] gh/swolchok/829/orig -> origin/gh/swolchok/829/orig 2025-09-07T07:34:58.3736059Z * [new branch] gh/swolchok/830/base -> origin/gh/swolchok/830/base 2025-09-07T07:34:58.3736286Z * [new branch] gh/swolchok/830/head -> origin/gh/swolchok/830/head 2025-09-07T07:34:58.3736444Z * [new branch] gh/swolchok/830/orig -> origin/gh/swolchok/830/orig 2025-09-07T07:34:58.3736610Z * [new branch] gh/swolchok/831/base -> origin/gh/swolchok/831/base 2025-09-07T07:34:58.3736771Z * [new branch] gh/swolchok/831/head -> origin/gh/swolchok/831/head 2025-09-07T07:34:58.3736910Z * [new branch] gh/swolchok/831/orig -> origin/gh/swolchok/831/orig 2025-09-07T07:34:58.3737121Z * [new branch] gh/swolchok/832/base -> origin/gh/swolchok/832/base 2025-09-07T07:34:58.3737720Z * [new branch] gh/swolchok/832/head -> origin/gh/swolchok/832/head 2025-09-07T07:34:58.3738244Z * [new branch] gh/swolchok/832/orig -> origin/gh/swolchok/832/orig 2025-09-07T07:34:58.3742200Z * [new branch] gh/syed-ahmed/3/base -> origin/gh/syed-ahmed/3/base 2025-09-07T07:34:58.3742412Z * [new branch] gh/syed-ahmed/3/head -> origin/gh/syed-ahmed/3/head 2025-09-07T07:34:58.3742567Z * [new branch] gh/syed-ahmed/3/orig -> origin/gh/syed-ahmed/3/orig 2025-09-07T07:34:58.3742716Z * [new branch] gh/syed-ahmed/4/base -> origin/gh/syed-ahmed/4/base 2025-09-07T07:34:58.3742870Z * 
[new branch] gh/syed-ahmed/4/head -> origin/gh/syed-ahmed/4/head 2025-09-07T07:34:58.3743241Z * [new branch] gh/syed-ahmed/4/orig -> origin/gh/syed-ahmed/4/orig 2025-09-07T07:34:58.3743749Z * [new branch] gh/syed-ahmed/5/base -> origin/gh/syed-ahmed/5/base 2025-09-07T07:34:58.3746709Z * [new branch] gh/syed-ahmed/5/head -> origin/gh/syed-ahmed/5/head 2025-09-07T07:34:58.3746898Z * [new branch] gh/syed-ahmed/5/orig -> origin/gh/syed-ahmed/5/orig 2025-09-07T07:34:58.3747047Z * [new branch] gh/teja-rao/4/base -> origin/gh/teja-rao/4/base 2025-09-07T07:34:58.3747325Z * [new branch] gh/teja-rao/4/head -> origin/gh/teja-rao/4/head 2025-09-07T07:34:58.3749707Z * [new branch] gh/teja-rao/4/orig -> origin/gh/teja-rao/4/orig 2025-09-07T07:34:58.3750050Z * [new branch] gh/tianyu-l/2/base -> origin/gh/tianyu-l/2/base 2025-09-07T07:34:58.3750264Z * [new branch] gh/tianyu-l/2/head -> origin/gh/tianyu-l/2/head 2025-09-07T07:34:58.3750637Z * [new branch] gh/tianyu-l/2/orig -> origin/gh/tianyu-l/2/orig 2025-09-07T07:34:58.3754512Z * [new branch] gh/tianyu-l/3/base -> origin/gh/tianyu-l/3/base 2025-09-07T07:34:58.3754843Z * [new branch] gh/tianyu-l/3/head -> origin/gh/tianyu-l/3/head 2025-09-07T07:34:58.3755025Z * [new branch] gh/tianyu-l/3/orig -> origin/gh/tianyu-l/3/orig 2025-09-07T07:34:58.3755207Z * [new branch] gh/tianyu-l/4/base -> origin/gh/tianyu-l/4/base 2025-09-07T07:34:58.3755580Z * [new branch] gh/tianyu-l/4/head -> origin/gh/tianyu-l/4/head 2025-09-07T07:34:58.3755853Z * [new branch] gh/tianyu-l/4/orig -> origin/gh/tianyu-l/4/orig 2025-09-07T07:34:58.3756367Z * [new branch] gh/tugsbayasgalan/1/base -> origin/gh/tugsbayasgalan/1/base 2025-09-07T07:34:58.3757253Z * [new branch] gh/tugsbayasgalan/1/head -> origin/gh/tugsbayasgalan/1/head 2025-09-07T07:34:58.3757725Z * [new branch] gh/tugsbayasgalan/1/orig -> origin/gh/tugsbayasgalan/1/orig 2025-09-07T07:34:58.3760798Z * [new branch] gh/tugsbayasgalan/10/base -> origin/gh/tugsbayasgalan/10/base 2025-09-07T07:34:58.3761151Z * [new branch] gh/tugsbayasgalan/10/head -> origin/gh/tugsbayasgalan/10/head 2025-09-07T07:34:58.3761402Z * [new branch] gh/tugsbayasgalan/10/orig -> origin/gh/tugsbayasgalan/10/orig 2025-09-07T07:34:58.3761604Z * [new branch] gh/tugsbayasgalan/11/base -> origin/gh/tugsbayasgalan/11/base 2025-09-07T07:34:58.3761905Z * [new branch] gh/tugsbayasgalan/11/head -> origin/gh/tugsbayasgalan/11/head 2025-09-07T07:34:58.3763302Z * [new branch] gh/tugsbayasgalan/11/orig -> origin/gh/tugsbayasgalan/11/orig 2025-09-07T07:34:58.3764425Z * [new branch] gh/tugsbayasgalan/12/base -> origin/gh/tugsbayasgalan/12/base 2025-09-07T07:34:58.3765408Z * [new branch] gh/tugsbayasgalan/12/head -> origin/gh/tugsbayasgalan/12/head 2025-09-07T07:34:58.3765727Z * [new branch] gh/tugsbayasgalan/12/orig -> origin/gh/tugsbayasgalan/12/orig 2025-09-07T07:34:58.3767172Z * [new branch] gh/tugsbayasgalan/13/base -> origin/gh/tugsbayasgalan/13/base 2025-09-07T07:34:58.3767350Z * [new branch] gh/tugsbayasgalan/13/head -> origin/gh/tugsbayasgalan/13/head 2025-09-07T07:34:58.3770842Z * [new branch] gh/tugsbayasgalan/13/orig -> origin/gh/tugsbayasgalan/13/orig 2025-09-07T07:34:58.3771040Z * [new branch] gh/tugsbayasgalan/14/base -> origin/gh/tugsbayasgalan/14/base 2025-09-07T07:34:58.3771197Z * [new branch] gh/tugsbayasgalan/14/head -> origin/gh/tugsbayasgalan/14/head 2025-09-07T07:34:58.3771381Z * [new branch] gh/tugsbayasgalan/14/orig -> origin/gh/tugsbayasgalan/14/orig 2025-09-07T07:34:58.3771683Z * [new branch] gh/tugsbayasgalan/15/base -> 
origin/gh/tugsbayasgalan/15/base 2025-09-07T07:34:58.3772133Z * [new branch] gh/tugsbayasgalan/15/head -> origin/gh/tugsbayasgalan/15/head 2025-09-07T07:34:58.3772738Z * [new branch] gh/tugsbayasgalan/15/orig -> origin/gh/tugsbayasgalan/15/orig 2025-09-07T07:34:58.3777674Z * [new branch] gh/tugsbayasgalan/2/base -> origin/gh/tugsbayasgalan/2/base 2025-09-07T07:34:58.3778050Z * [new branch] gh/tugsbayasgalan/2/head -> origin/gh/tugsbayasgalan/2/head 2025-09-07T07:34:58.3778354Z * [new branch] gh/tugsbayasgalan/2/orig -> origin/gh/tugsbayasgalan/2/orig 2025-09-07T07:34:58.3778618Z * [new branch] gh/tugsbayasgalan/3/base -> origin/gh/tugsbayasgalan/3/base 2025-09-07T07:34:58.3778804Z * [new branch] gh/tugsbayasgalan/3/head -> origin/gh/tugsbayasgalan/3/head 2025-09-07T07:34:58.3779362Z * [new branch] gh/tugsbayasgalan/3/orig -> origin/gh/tugsbayasgalan/3/orig 2025-09-07T07:34:58.3779574Z * [new branch] gh/tugsbayasgalan/4/base -> origin/gh/tugsbayasgalan/4/base 2025-09-07T07:34:58.3779759Z * [new branch] gh/tugsbayasgalan/4/head -> origin/gh/tugsbayasgalan/4/head 2025-09-07T07:34:58.3780211Z * [new branch] gh/tugsbayasgalan/4/orig -> origin/gh/tugsbayasgalan/4/orig 2025-09-07T07:34:58.3784376Z * [new branch] gh/tugsbayasgalan/5/base -> origin/gh/tugsbayasgalan/5/base 2025-09-07T07:34:58.3784895Z * [new branch] gh/tugsbayasgalan/5/head -> origin/gh/tugsbayasgalan/5/head 2025-09-07T07:34:58.3785197Z * [new branch] gh/tugsbayasgalan/5/orig -> origin/gh/tugsbayasgalan/5/orig 2025-09-07T07:34:58.3785404Z * [new branch] gh/tugsbayasgalan/6/base -> origin/gh/tugsbayasgalan/6/base 2025-09-07T07:34:58.3785654Z * [new branch] gh/tugsbayasgalan/6/head -> origin/gh/tugsbayasgalan/6/head 2025-09-07T07:34:58.3785832Z * [new branch] gh/tugsbayasgalan/6/orig -> origin/gh/tugsbayasgalan/6/orig 2025-09-07T07:34:58.3786284Z * [new branch] gh/tugsbayasgalan/7/base -> origin/gh/tugsbayasgalan/7/base 2025-09-07T07:34:58.3791696Z * [new branch] gh/tugsbayasgalan/7/head -> origin/gh/tugsbayasgalan/7/head 2025-09-07T07:34:58.3791917Z * [new branch] gh/tugsbayasgalan/7/orig -> origin/gh/tugsbayasgalan/7/orig 2025-09-07T07:34:58.3792087Z * [new branch] gh/tugsbayasgalan/8/base -> origin/gh/tugsbayasgalan/8/base 2025-09-07T07:34:58.3792285Z * [new branch] gh/tugsbayasgalan/8/head -> origin/gh/tugsbayasgalan/8/head 2025-09-07T07:34:58.3792460Z * [new branch] gh/tugsbayasgalan/8/orig -> origin/gh/tugsbayasgalan/8/orig 2025-09-07T07:34:58.3792625Z * [new branch] gh/tugsbayasgalan/9/base -> origin/gh/tugsbayasgalan/9/base 2025-09-07T07:34:58.3792790Z * [new branch] gh/tugsbayasgalan/9/head -> origin/gh/tugsbayasgalan/9/head 2025-09-07T07:34:58.3793134Z * [new branch] gh/tugsbayasgalan/9/orig -> origin/gh/tugsbayasgalan/9/orig 2025-09-07T07:34:58.3793818Z * [new branch] gh/v0i0/1/base -> origin/gh/v0i0/1/base 2025-09-07T07:34:58.3794340Z * [new branch] gh/v0i0/1/head -> origin/gh/v0i0/1/head 2025-09-07T07:34:58.3798198Z * [new branch] gh/v0i0/1/orig -> origin/gh/v0i0/1/orig 2025-09-07T07:34:58.3798530Z * [new branch] gh/v0i0/4/base -> origin/gh/v0i0/4/base 2025-09-07T07:34:58.3798907Z * [new branch] gh/v0i0/4/head -> origin/gh/v0i0/4/head 2025-09-07T07:34:58.3799158Z * [new branch] gh/v0i0/4/orig -> origin/gh/v0i0/4/orig 2025-09-07T07:34:58.3799303Z * [new branch] gh/v0i0/6/base -> origin/gh/v0i0/6/base 2025-09-07T07:34:58.3799617Z * [new branch] gh/v0i0/6/head -> origin/gh/v0i0/6/head 2025-09-07T07:34:58.3800010Z * [new branch] gh/v0i0/6/orig -> origin/gh/v0i0/6/orig 2025-09-07T07:34:58.3800701Z * [new branch] 
gh/v0i0/7/base -> origin/gh/v0i0/7/base 2025-09-07T07:34:58.3801640Z * [new branch] gh/v0i0/7/head -> origin/gh/v0i0/7/head 2025-09-07T07:34:58.3802321Z * [new branch] gh/v0i0/7/orig -> origin/gh/v0i0/7/orig 2025-09-07T07:34:58.3802997Z * [new branch] gh/v0i0/8/base -> origin/gh/v0i0/8/base 2025-09-07T07:34:58.3803549Z * [new branch] gh/v0i0/8/head -> origin/gh/v0i0/8/head 2025-09-07T07:34:58.3804453Z * [new branch] gh/v0i0/8/orig -> origin/gh/v0i0/8/orig 2025-09-07T07:34:58.3805533Z * [new branch] gh/v0i0/9/base -> origin/gh/v0i0/9/base 2025-09-07T07:34:58.3805690Z * [new branch] gh/v0i0/9/head -> origin/gh/v0i0/9/head 2025-09-07T07:34:58.3807157Z * [new branch] gh/v0i0/9/orig -> origin/gh/v0i0/9/orig 2025-09-07T07:34:58.3807978Z * [new branch] gh/vkuzo/1/next -> origin/gh/vkuzo/1/next 2025-09-07T07:34:58.3809078Z * [new branch] gh/vkuzo/2/next -> origin/gh/vkuzo/2/next 2025-09-07T07:34:58.3809633Z * [new branch] gh/vkuzo/3/next -> origin/gh/vkuzo/3/next 2025-09-07T07:34:58.3811199Z * [new branch] gh/vkuzo/4/base -> origin/gh/vkuzo/4/base 2025-09-07T07:34:58.3812007Z * [new branch] gh/vkuzo/4/head -> origin/gh/vkuzo/4/head 2025-09-07T07:34:58.3812402Z * [new branch] gh/vkuzo/4/orig -> origin/gh/vkuzo/4/orig 2025-09-07T07:34:58.3814582Z * [new branch] gh/vkuzo/5/base -> origin/gh/vkuzo/5/base 2025-09-07T07:34:58.3814929Z * [new branch] gh/vkuzo/5/head -> origin/gh/vkuzo/5/head 2025-09-07T07:34:58.3816259Z * [new branch] gh/vkuzo/5/orig -> origin/gh/vkuzo/5/orig 2025-09-07T07:34:58.3816853Z * [new branch] gh/vkuzo/6/base -> origin/gh/vkuzo/6/base 2025-09-07T07:34:58.3817442Z * [new branch] gh/vkuzo/6/head -> origin/gh/vkuzo/6/head 2025-09-07T07:34:58.3818276Z * [new branch] gh/vkuzo/6/orig -> origin/gh/vkuzo/6/orig 2025-09-07T07:34:58.3819226Z * [new branch] gh/vkuzo/7/base -> origin/gh/vkuzo/7/base 2025-09-07T07:34:58.3819614Z * [new branch] gh/vkuzo/7/head -> origin/gh/vkuzo/7/head 2025-09-07T07:34:58.3820665Z * [new branch] gh/vkuzo/7/orig -> origin/gh/vkuzo/7/orig 2025-09-07T07:34:58.3821928Z * [new branch] gh/wconstab/419/base -> origin/gh/wconstab/419/base 2025-09-07T07:34:58.3822095Z * [new branch] gh/wconstab/419/head -> origin/gh/wconstab/419/head 2025-09-07T07:34:58.3824966Z * [new branch] gh/wconstab/419/orig -> origin/gh/wconstab/419/orig 2025-09-07T07:34:58.3825193Z * [new branch] gh/wconstab/424/base -> origin/gh/wconstab/424/base 2025-09-07T07:34:58.3825358Z * [new branch] gh/wconstab/424/head -> origin/gh/wconstab/424/head 2025-09-07T07:34:58.3825523Z * [new branch] gh/wconstab/424/orig -> origin/gh/wconstab/424/orig 2025-09-07T07:34:58.3826180Z * [new branch] gh/wconstab/435/base -> origin/gh/wconstab/435/base 2025-09-07T07:34:58.3827767Z * [new branch] gh/wconstab/435/head -> origin/gh/wconstab/435/head 2025-09-07T07:34:58.3827933Z * [new branch] gh/wconstab/435/orig -> origin/gh/wconstab/435/orig 2025-09-07T07:34:58.3828091Z * [new branch] gh/wconstab/438/base -> origin/gh/wconstab/438/base 2025-09-07T07:34:58.3829251Z * [new branch] gh/wconstab/438/head -> origin/gh/wconstab/438/head 2025-09-07T07:34:58.3829480Z * [new branch] gh/wconstab/438/orig -> origin/gh/wconstab/438/orig 2025-09-07T07:34:58.3830843Z * [new branch] gh/wconstab/440/base -> origin/gh/wconstab/440/base 2025-09-07T07:34:58.3831325Z * [new branch] gh/wconstab/440/head -> origin/gh/wconstab/440/head 2025-09-07T07:34:58.3832449Z * [new branch] gh/wconstab/440/orig -> origin/gh/wconstab/440/orig 2025-09-07T07:34:58.3833447Z * [new branch] gh/wconstab/441/base -> origin/gh/wconstab/441/base 
2025-09-07T07:34:58.3833616Z * [new branch] gh/wconstab/441/head -> origin/gh/wconstab/441/head 2025-09-07T07:34:58.3835030Z * [new branch] gh/wconstab/441/orig -> origin/gh/wconstab/441/orig 2025-09-07T07:34:58.3835310Z * [new branch] gh/wconstab/442/base -> origin/gh/wconstab/442/base 2025-09-07T07:34:58.3836517Z * [new branch] gh/wconstab/442/head -> origin/gh/wconstab/442/head 2025-09-07T07:34:58.3836879Z * [new branch] gh/wconstab/442/orig -> origin/gh/wconstab/442/orig 2025-09-07T07:34:58.3838172Z * [new branch] gh/wconstab/443/base -> origin/gh/wconstab/443/base 2025-09-07T07:34:58.3838596Z * [new branch] gh/wconstab/443/head -> origin/gh/wconstab/443/head 2025-09-07T07:34:58.3839222Z * [new branch] gh/wconstab/443/orig -> origin/gh/wconstab/443/orig 2025-09-07T07:34:58.3840513Z * [new branch] gh/wconstab/444/base -> origin/gh/wconstab/444/base 2025-09-07T07:34:58.3840754Z * [new branch] gh/wconstab/444/head -> origin/gh/wconstab/444/head 2025-09-07T07:34:58.3841584Z * [new branch] gh/wconstab/444/orig -> origin/gh/wconstab/444/orig 2025-09-07T07:34:58.3842742Z * [new branch] gh/wconstab/445/base -> origin/gh/wconstab/445/base 2025-09-07T07:34:58.3842987Z * [new branch] gh/wconstab/445/head -> origin/gh/wconstab/445/head 2025-09-07T07:34:58.3843920Z * [new branch] gh/wconstab/445/orig -> origin/gh/wconstab/445/orig 2025-09-07T07:34:58.3845450Z * [new branch] gh/wconstab/446/base -> origin/gh/wconstab/446/base 2025-09-07T07:34:58.3850089Z * [new branch] gh/wconstab/446/head -> origin/gh/wconstab/446/head 2025-09-07T07:34:58.3850661Z * [new branch] gh/wconstab/446/orig -> origin/gh/wconstab/446/orig 2025-09-07T07:34:58.3858012Z * [new branch] gh/wconstab/447/base -> origin/gh/wconstab/447/base 2025-09-07T07:34:58.3858205Z * [new branch] gh/wconstab/447/head -> origin/gh/wconstab/447/head 2025-09-07T07:34:58.3858345Z * [new branch] gh/wconstab/447/orig -> origin/gh/wconstab/447/orig 2025-09-07T07:34:58.3858506Z * [new branch] gh/weifengpy/27/base -> origin/gh/weifengpy/27/base 2025-09-07T07:34:58.3858640Z * [new branch] gh/weifengpy/27/head -> origin/gh/weifengpy/27/head 2025-09-07T07:34:58.3858776Z * [new branch] gh/weifengpy/27/orig -> origin/gh/weifengpy/27/orig 2025-09-07T07:34:58.3863454Z * [new branch] gh/weifengpy/30/base -> origin/gh/weifengpy/30/base 2025-09-07T07:34:58.3863787Z * [new branch] gh/weifengpy/30/head -> origin/gh/weifengpy/30/head 2025-09-07T07:34:58.3863954Z * [new branch] gh/weifengpy/30/orig -> origin/gh/weifengpy/30/orig 2025-09-07T07:34:58.3864134Z * [new branch] gh/williamwen42/196/base -> origin/gh/williamwen42/196/base 2025-09-07T07:34:58.3864362Z * [new branch] gh/williamwen42/196/head -> origin/gh/williamwen42/196/head 2025-09-07T07:34:58.3870279Z * [new branch] gh/williamwen42/196/orig -> origin/gh/williamwen42/196/orig 2025-09-07T07:34:58.3872710Z * [new branch] gh/williamwen42/250/base -> origin/gh/williamwen42/250/base 2025-09-07T07:34:58.3878463Z * [new branch] gh/williamwen42/250/head -> origin/gh/williamwen42/250/head 2025-09-07T07:34:58.3881992Z * [new branch] gh/williamwen42/250/orig -> origin/gh/williamwen42/250/orig 2025-09-07T07:34:58.3882201Z * [new branch] gh/williamwen42/258/base -> origin/gh/williamwen42/258/base 2025-09-07T07:34:58.3882381Z * [new branch] gh/williamwen42/258/head -> origin/gh/williamwen42/258/head 2025-09-07T07:34:58.3882563Z * [new branch] gh/williamwen42/258/orig -> origin/gh/williamwen42/258/orig 2025-09-07T07:34:58.3882733Z * [new branch] gh/williamwen42/266/base -> origin/gh/williamwen42/266/base 
2025-09-07T07:34:58.3882887Z * [new branch] gh/williamwen42/266/head -> origin/gh/williamwen42/266/head 2025-09-07T07:34:58.3883060Z * [new branch] gh/williamwen42/266/orig -> origin/gh/williamwen42/266/orig 2025-09-07T07:34:58.3883227Z * [new branch] gh/williamwen42/267/base -> origin/gh/williamwen42/267/base 2025-09-07T07:34:58.3883425Z * [new branch] gh/williamwen42/267/head -> origin/gh/williamwen42/267/head 2025-09-07T07:34:58.3883587Z * [new branch] gh/williamwen42/267/orig -> origin/gh/williamwen42/267/orig 2025-09-07T07:34:58.3883751Z * [new branch] gh/williamwen42/270/base -> origin/gh/williamwen42/270/base 2025-09-07T07:34:58.3883918Z * [new branch] gh/williamwen42/270/head -> origin/gh/williamwen42/270/head 2025-09-07T07:34:58.3884310Z * [new branch] gh/williamwen42/270/orig -> origin/gh/williamwen42/270/orig 2025-09-07T07:34:58.3884478Z * [new branch] gh/williamwen42/271/base -> origin/gh/williamwen42/271/base 2025-09-07T07:34:58.3884684Z * [new branch] gh/williamwen42/271/head -> origin/gh/williamwen42/271/head 2025-09-07T07:34:58.3884844Z * [new branch] gh/williamwen42/271/orig -> origin/gh/williamwen42/271/orig 2025-09-07T07:34:58.3885038Z * [new branch] gh/williamwen42/272/base -> origin/gh/williamwen42/272/base 2025-09-07T07:34:58.3885199Z * [new branch] gh/williamwen42/272/head -> origin/gh/williamwen42/272/head 2025-09-07T07:34:58.3885362Z * [new branch] gh/williamwen42/272/orig -> origin/gh/williamwen42/272/orig 2025-09-07T07:34:58.3885523Z * [new branch] gh/williamwen42/274/base -> origin/gh/williamwen42/274/base 2025-09-07T07:34:58.3885696Z * [new branch] gh/williamwen42/274/head -> origin/gh/williamwen42/274/head 2025-09-07T07:34:58.3885844Z * [new branch] gh/williamwen42/274/orig -> origin/gh/williamwen42/274/orig 2025-09-07T07:34:58.3886010Z * [new branch] gh/williamwen42/275/base -> origin/gh/williamwen42/275/base 2025-09-07T07:34:58.3886172Z * [new branch] gh/williamwen42/275/head -> origin/gh/williamwen42/275/head 2025-09-07T07:34:58.3886336Z * [new branch] gh/williamwen42/276/base -> origin/gh/williamwen42/276/base 2025-09-07T07:34:58.3886501Z * [new branch] gh/williamwen42/276/head -> origin/gh/williamwen42/276/head 2025-09-07T07:34:58.3886664Z * [new branch] gh/williamwen42/276/orig -> origin/gh/williamwen42/276/orig 2025-09-07T07:34:58.3887161Z * [new branch] gh/williamwen42/277/base -> origin/gh/williamwen42/277/base 2025-09-07T07:34:58.3887347Z * [new branch] gh/williamwen42/277/head -> origin/gh/williamwen42/277/head 2025-09-07T07:34:58.3887516Z * [new branch] gh/williamwen42/277/orig -> origin/gh/williamwen42/277/orig 2025-09-07T07:34:58.3887681Z * [new branch] gh/williamwen42/278/base -> origin/gh/williamwen42/278/base 2025-09-07T07:34:58.3887843Z * [new branch] gh/williamwen42/278/head -> origin/gh/williamwen42/278/head 2025-09-07T07:34:58.3888005Z * [new branch] gh/williamwen42/278/orig -> origin/gh/williamwen42/278/orig 2025-09-07T07:34:58.3888165Z * [new branch] gh/williamwen42/279/base -> origin/gh/williamwen42/279/base 2025-09-07T07:34:58.3888371Z * [new branch] gh/williamwen42/279/head -> origin/gh/williamwen42/279/head 2025-09-07T07:34:58.3888527Z * [new branch] gh/williamwen42/279/orig -> origin/gh/williamwen42/279/orig 2025-09-07T07:34:58.3891338Z * [new branch] gh/williamwen42/280/base -> origin/gh/williamwen42/280/base 2025-09-07T07:34:58.3891644Z * [new branch] gh/williamwen42/280/head -> origin/gh/williamwen42/280/head 2025-09-07T07:34:58.3895886Z * [new branch] gh/williamwen42/280/orig -> origin/gh/williamwen42/280/orig 
2025-09-07T07:34:58.3899307Z * [new branch] gh/williamwen42/281/base -> origin/gh/williamwen42/281/base 2025-09-07T07:34:58.3899530Z * [new branch] gh/williamwen42/281/head -> origin/gh/williamwen42/281/head 2025-09-07T07:34:58.3899744Z * [new branch] gh/williamwen42/281/orig -> origin/gh/williamwen42/281/orig 2025-09-07T07:34:58.3899925Z * [new branch] gh/williamwen42/282/base -> origin/gh/williamwen42/282/base 2025-09-07T07:34:58.3900106Z * [new branch] gh/williamwen42/282/head -> origin/gh/williamwen42/282/head 2025-09-07T07:34:58.3900268Z * [new branch] gh/williamwen42/282/orig -> origin/gh/williamwen42/282/orig 2025-09-07T07:34:58.3900428Z * [new branch] gh/williamwen42/283/base -> origin/gh/williamwen42/283/base 2025-09-07T07:34:58.3900790Z * [new branch] gh/williamwen42/283/head -> origin/gh/williamwen42/283/head 2025-09-07T07:34:58.3900961Z * [new branch] gh/williamwen42/283/orig -> origin/gh/williamwen42/283/orig 2025-09-07T07:34:58.3901133Z * [new branch] gh/williamwen42/284/base -> origin/gh/williamwen42/284/base 2025-09-07T07:34:58.3901298Z * [new branch] gh/williamwen42/284/head -> origin/gh/williamwen42/284/head 2025-09-07T07:34:58.3901454Z * [new branch] gh/williamwen42/284/orig -> origin/gh/williamwen42/284/orig 2025-09-07T07:34:58.3906296Z * [new branch] gh/williamwen42/285/base -> origin/gh/williamwen42/285/base 2025-09-07T07:34:58.3908040Z * [new branch] gh/williamwen42/285/head -> origin/gh/williamwen42/285/head 2025-09-07T07:34:58.3908340Z * [new branch] gh/williamwen42/285/orig -> origin/gh/williamwen42/285/orig 2025-09-07T07:34:58.3914724Z * [new branch] gh/williamwen42/286/base -> origin/gh/williamwen42/286/base 2025-09-07T07:34:58.3916522Z * [new branch] gh/williamwen42/286/head -> origin/gh/williamwen42/286/head 2025-09-07T07:34:58.3916825Z * [new branch] gh/williamwen42/286/orig -> origin/gh/williamwen42/286/orig 2025-09-07T07:34:58.3917007Z * [new branch] gh/williamwen42/287/base -> origin/gh/williamwen42/287/base 2025-09-07T07:34:58.3917236Z * [new branch] gh/williamwen42/287/head -> origin/gh/williamwen42/287/head 2025-09-07T07:34:58.3917416Z * [new branch] gh/williamwen42/287/orig -> origin/gh/williamwen42/287/orig 2025-09-07T07:34:58.3917670Z * [new branch] gh/williamwen42/288/base -> origin/gh/williamwen42/288/base 2025-09-07T07:34:58.3917840Z * [new branch] gh/williamwen42/288/head -> origin/gh/williamwen42/288/head 2025-09-07T07:34:58.3918099Z * [new branch] gh/williamwen42/288/orig -> origin/gh/williamwen42/288/orig 2025-09-07T07:34:58.3918270Z * [new branch] gh/williamwen42/289/base -> origin/gh/williamwen42/289/base 2025-09-07T07:34:58.3918952Z * [new branch] gh/williamwen42/289/head -> origin/gh/williamwen42/289/head 2025-09-07T07:34:58.3919138Z * [new branch] gh/williamwen42/289/orig -> origin/gh/williamwen42/289/orig 2025-09-07T07:34:58.3919292Z * [new branch] gh/wychi/1/base -> origin/gh/wychi/1/base 2025-09-07T07:34:58.3919419Z * [new branch] gh/wychi/1/head -> origin/gh/wychi/1/head 2025-09-07T07:34:58.3919716Z * [new branch] gh/wychi/1/orig -> origin/gh/wychi/1/orig 2025-09-07T07:34:58.3919851Z * [new branch] gh/xmfan/169/base -> origin/gh/xmfan/169/base 2025-09-07T07:34:58.3919977Z * [new branch] gh/xmfan/169/head -> origin/gh/xmfan/169/head 2025-09-07T07:34:58.3920118Z * [new branch] gh/xmfan/170/base -> origin/gh/xmfan/170/base 2025-09-07T07:34:58.3920246Z * [new branch] gh/xmfan/170/head -> origin/gh/xmfan/170/head 2025-09-07T07:34:58.3920389Z * [new branch] gh/xmfan/18/base -> origin/gh/xmfan/18/base 2025-09-07T07:34:58.3920515Z * [new 
branch] gh/xmfan/18/head -> origin/gh/xmfan/18/head 2025-09-07T07:34:58.3920653Z * [new branch] gh/xmfan/229/base -> origin/gh/xmfan/229/base 2025-09-07T07:34:58.3920951Z * [new branch] gh/xmfan/229/head -> origin/gh/xmfan/229/head 2025-09-07T07:34:58.3921103Z * [new branch] gh/xmfan/229/orig -> origin/gh/xmfan/229/orig 2025-09-07T07:34:58.3921322Z * [new branch] gh/xmfan/237/base -> origin/gh/xmfan/237/base 2025-09-07T07:34:58.3923303Z * [new branch] gh/xmfan/237/head -> origin/gh/xmfan/237/head 2025-09-07T07:34:58.3923631Z * [new branch] gh/xmfan/237/orig -> origin/gh/xmfan/237/orig 2025-09-07T07:34:58.3923770Z * [new branch] gh/xmfan/244/base -> origin/gh/xmfan/244/base 2025-09-07T07:34:58.3924380Z * [new branch] gh/xmfan/244/head -> origin/gh/xmfan/244/head 2025-09-07T07:34:58.3924979Z * [new branch] gh/xmfan/244/orig -> origin/gh/xmfan/244/orig 2025-09-07T07:34:58.3926280Z * [new branch] gh/xmfan/246/base -> origin/gh/xmfan/246/base 2025-09-07T07:34:58.3926495Z * [new branch] gh/xmfan/246/head -> origin/gh/xmfan/246/head 2025-09-07T07:34:58.3927648Z * [new branch] gh/xmfan/246/orig -> origin/gh/xmfan/246/orig 2025-09-07T07:34:58.3928119Z * [new branch] gh/xmfan/253/base -> origin/gh/xmfan/253/base 2025-09-07T07:34:58.3930565Z * [new branch] gh/xmfan/253/head -> origin/gh/xmfan/253/head 2025-09-07T07:34:58.3930750Z * [new branch] gh/xmfan/253/orig -> origin/gh/xmfan/253/orig 2025-09-07T07:34:58.3930918Z * [new branch] gh/xmfan/254/base -> origin/gh/xmfan/254/base 2025-09-07T07:34:58.3931055Z * [new branch] gh/xmfan/254/head -> origin/gh/xmfan/254/head 2025-09-07T07:34:58.3931545Z * [new branch] gh/xmfan/254/orig -> origin/gh/xmfan/254/orig 2025-09-07T07:34:58.3935646Z * [new branch] gh/xmfan/260/base -> origin/gh/xmfan/260/base 2025-09-07T07:34:58.3935817Z * [new branch] gh/xmfan/260/head -> origin/gh/xmfan/260/head 2025-09-07T07:34:58.3935982Z * [new branch] gh/xmfan/260/orig -> origin/gh/xmfan/260/orig 2025-09-07T07:34:58.3936113Z * [new branch] gh/xmfan/262/base -> origin/gh/xmfan/262/base 2025-09-07T07:34:58.3936250Z * [new branch] gh/xmfan/262/head -> origin/gh/xmfan/262/head 2025-09-07T07:34:58.3936417Z * [new branch] gh/xmfan/262/orig -> origin/gh/xmfan/262/orig 2025-09-07T07:34:58.3940393Z * [new branch] gh/xmfan/263/base -> origin/gh/xmfan/263/base 2025-09-07T07:34:58.3940577Z * [new branch] gh/xmfan/263/head -> origin/gh/xmfan/263/head 2025-09-07T07:34:58.3940729Z * [new branch] gh/xmfan/263/orig -> origin/gh/xmfan/263/orig 2025-09-07T07:34:58.3940876Z * [new branch] gh/xmfan/264/base -> origin/gh/xmfan/264/base 2025-09-07T07:34:58.3941005Z * [new branch] gh/xmfan/264/head -> origin/gh/xmfan/264/head 2025-09-07T07:34:58.3941293Z * [new branch] gh/xmfan/264/orig -> origin/gh/xmfan/264/orig 2025-09-07T07:34:58.3941822Z * [new branch] gh/xmfan/274/base -> origin/gh/xmfan/274/base 2025-09-07T07:34:58.3942618Z * [new branch] gh/xmfan/274/head -> origin/gh/xmfan/274/head 2025-09-07T07:34:58.3943022Z * [new branch] gh/xmfan/274/orig -> origin/gh/xmfan/274/orig 2025-09-07T07:34:58.3946491Z * [new branch] gh/xmfan/276/base -> origin/gh/xmfan/276/base 2025-09-07T07:34:58.3946654Z * [new branch] gh/xmfan/276/head -> origin/gh/xmfan/276/head 2025-09-07T07:34:58.3946794Z * [new branch] gh/xmfan/276/orig -> origin/gh/xmfan/276/orig 2025-09-07T07:34:58.3946921Z * [new branch] gh/xmfan/277/base -> origin/gh/xmfan/277/base 2025-09-07T07:34:58.3947049Z * [new branch] gh/xmfan/277/head -> origin/gh/xmfan/277/head 2025-09-07T07:34:58.3947719Z * [new branch] gh/xmfan/277/orig -> 
origin/gh/xmfan/277/orig 2025-09-07T07:34:58.3952553Z * [new branch] gh/xmfan/278/base -> origin/gh/xmfan/278/base 2025-09-07T07:34:58.3952743Z * [new branch] gh/xmfan/278/head -> origin/gh/xmfan/278/head 2025-09-07T07:34:58.3953076Z * [new branch] gh/xmfan/278/orig -> origin/gh/xmfan/278/orig 2025-09-07T07:34:58.3953220Z * [new branch] gh/xmfan/279/base -> origin/gh/xmfan/279/base 2025-09-07T07:34:58.3953347Z * [new branch] gh/xmfan/279/head -> origin/gh/xmfan/279/head 2025-09-07T07:34:58.3953471Z * [new branch] gh/xmfan/279/orig -> origin/gh/xmfan/279/orig 2025-09-07T07:34:58.3953608Z * [new branch] gh/xmfan/280/base -> origin/gh/xmfan/280/base 2025-09-07T07:34:58.3955011Z * [new branch] gh/xmfan/280/head -> origin/gh/xmfan/280/head 2025-09-07T07:34:58.3955163Z * [new branch] gh/xmfan/280/orig -> origin/gh/xmfan/280/orig 2025-09-07T07:34:58.3955291Z * [new branch] gh/xmfan/281/base -> origin/gh/xmfan/281/base 2025-09-07T07:34:58.3956381Z * [new branch] gh/xmfan/281/head -> origin/gh/xmfan/281/head 2025-09-07T07:34:58.3956739Z * [new branch] gh/xmfan/281/orig -> origin/gh/xmfan/281/orig 2025-09-07T07:34:58.3959555Z * [new branch] gh/xmfan/282/base -> origin/gh/xmfan/282/base 2025-09-07T07:34:58.3959722Z * [new branch] gh/xmfan/282/head -> origin/gh/xmfan/282/head 2025-09-07T07:34:58.3959860Z * [new branch] gh/xmfan/283/base -> origin/gh/xmfan/283/base 2025-09-07T07:34:58.3959990Z * [new branch] gh/xmfan/283/head -> origin/gh/xmfan/283/head 2025-09-07T07:34:58.3960326Z * [new branch] gh/xmfan/283/orig -> origin/gh/xmfan/283/orig 2025-09-07T07:34:58.3961721Z * [new branch] gh/xuanzhang816/14/base -> origin/gh/xuanzhang816/14/base 2025-09-07T07:34:58.3965636Z * [new branch] gh/xuanzhang816/14/head -> origin/gh/xuanzhang816/14/head 2025-09-07T07:34:58.3966268Z * [new branch] gh/xuanzhang816/14/orig -> origin/gh/xuanzhang816/14/orig 2025-09-07T07:34:58.3967604Z * [new branch] gh/xuanzhang816/19/base -> origin/gh/xuanzhang816/19/base 2025-09-07T07:34:58.3967784Z * [new branch] gh/xuanzhang816/19/head -> origin/gh/xuanzhang816/19/head 2025-09-07T07:34:58.3970856Z * [new branch] gh/xuanzhang816/19/orig -> origin/gh/xuanzhang816/19/orig 2025-09-07T07:34:58.3971055Z * [new branch] gh/xuanzhang816/22/base -> origin/gh/xuanzhang816/22/base 2025-09-07T07:34:58.3971227Z * [new branch] gh/xuanzhang816/22/head -> origin/gh/xuanzhang816/22/head 2025-09-07T07:34:58.3971597Z * [new branch] gh/xuanzhang816/22/orig -> origin/gh/xuanzhang816/22/orig 2025-09-07T07:34:58.3974572Z * [new branch] gh/xuanzhang816/23/base -> origin/gh/xuanzhang816/23/base 2025-09-07T07:34:58.3974932Z * [new branch] gh/xuanzhang816/23/head -> origin/gh/xuanzhang816/23/head 2025-09-07T07:34:58.3975192Z * [new branch] gh/xuanzhang816/23/orig -> origin/gh/xuanzhang816/23/orig 2025-09-07T07:34:58.3975423Z * [new branch] gh/xuanzhang816/24/base -> origin/gh/xuanzhang816/24/base 2025-09-07T07:34:58.3975722Z * [new branch] gh/xuanzhang816/24/head -> origin/gh/xuanzhang816/24/head 2025-09-07T07:34:58.3976358Z * [new branch] gh/xuanzhang816/24/orig -> origin/gh/xuanzhang816/24/orig 2025-09-07T07:34:58.3976781Z * [new branch] gh/xuanzhang816/25/base -> origin/gh/xuanzhang816/25/base 2025-09-07T07:34:58.3980655Z * [new branch] gh/xuanzhang816/25/head -> origin/gh/xuanzhang816/25/head 2025-09-07T07:34:58.3981035Z * [new branch] gh/xuanzhang816/25/orig -> origin/gh/xuanzhang816/25/orig 2025-09-07T07:34:58.3981293Z * [new branch] gh/xuanzhang816/26/base -> origin/gh/xuanzhang816/26/base 2025-09-07T07:34:58.3981478Z * [new branch] 
gh/xuanzhang816/26/head -> origin/gh/xuanzhang816/26/head 2025-09-07T07:34:58.3981798Z * [new branch] gh/xuanzhang816/26/orig -> origin/gh/xuanzhang816/26/orig 2025-09-07T07:34:58.3982102Z * [new branch] gh/yanbing-j/11/base -> origin/gh/yanbing-j/11/base 2025-09-07T07:34:58.3982714Z * [new branch] gh/yanbing-j/11/head -> origin/gh/yanbing-j/11/head 2025-09-07T07:34:58.3982918Z * [new branch] gh/yanbing-j/11/orig -> origin/gh/yanbing-j/11/orig 2025-09-07T07:34:58.3987452Z * [new branch] gh/yanbing-j/12/base -> origin/gh/yanbing-j/12/base 2025-09-07T07:34:58.3987814Z * [new branch] gh/yanbing-j/12/head -> origin/gh/yanbing-j/12/head 2025-09-07T07:34:58.3987979Z * [new branch] gh/yanbing-j/12/orig -> origin/gh/yanbing-j/12/orig 2025-09-07T07:34:58.3988118Z * [new branch] gh/yanbing-j/13/base -> origin/gh/yanbing-j/13/base 2025-09-07T07:34:58.3988394Z * [new branch] gh/yanbing-j/13/head -> origin/gh/yanbing-j/13/head 2025-09-07T07:34:58.3988533Z * [new branch] gh/yanbing-j/13/orig -> origin/gh/yanbing-j/13/orig 2025-09-07T07:34:58.3989206Z * [new branch] gh/yanbing-j/14/base -> origin/gh/yanbing-j/14/base 2025-09-07T07:34:58.3989394Z * [new branch] gh/yanbing-j/14/head -> origin/gh/yanbing-j/14/head 2025-09-07T07:34:58.3989553Z * [new branch] gh/yanbing-j/14/orig -> origin/gh/yanbing-j/14/orig 2025-09-07T07:34:58.3990419Z * [new branch] gh/yanbing-j/15/base -> origin/gh/yanbing-j/15/base 2025-09-07T07:34:58.3990815Z * [new branch] gh/yanbing-j/15/head -> origin/gh/yanbing-j/15/head 2025-09-07T07:34:58.3993475Z * [new branch] gh/yanbing-j/15/orig -> origin/gh/yanbing-j/15/orig 2025-09-07T07:34:58.3993804Z * [new branch] gh/yanbing-j/18/base -> origin/gh/yanbing-j/18/base 2025-09-07T07:34:58.3994012Z * [new branch] gh/yanbing-j/18/head -> origin/gh/yanbing-j/18/head 2025-09-07T07:34:58.3994273Z * [new branch] gh/yanbing-j/18/orig -> origin/gh/yanbing-j/18/orig 2025-09-07T07:34:58.3994519Z * [new branch] gh/yanbing-j/19/base -> origin/gh/yanbing-j/19/base 2025-09-07T07:34:58.3995341Z * [new branch] gh/yanbing-j/19/head -> origin/gh/yanbing-j/19/head 2025-09-07T07:34:58.3995863Z * [new branch] gh/yanbing-j/19/orig -> origin/gh/yanbing-j/19/orig 2025-09-07T07:34:58.3998499Z * [new branch] gh/yanbing-j/20/base -> origin/gh/yanbing-j/20/base 2025-09-07T07:34:58.3998996Z * [new branch] gh/yanbing-j/20/head -> origin/gh/yanbing-j/20/head 2025-09-07T07:34:58.3999236Z * [new branch] gh/yanbing-j/20/orig -> origin/gh/yanbing-j/20/orig 2025-09-07T07:34:58.3999393Z * [new branch] gh/yanbing-j/21/base -> origin/gh/yanbing-j/21/base 2025-09-07T07:34:58.3999802Z * [new branch] gh/yanbing-j/21/head -> origin/gh/yanbing-j/21/head 2025-09-07T07:34:58.4001089Z * [new branch] gh/yanbing-j/22/base -> origin/gh/yanbing-j/22/base 2025-09-07T07:34:58.4001742Z * [new branch] gh/yanbing-j/22/head -> origin/gh/yanbing-j/22/head 2025-09-07T07:34:58.4002184Z * [new branch] gh/yanbing-j/22/orig -> origin/gh/yanbing-j/22/orig 2025-09-07T07:34:58.4003320Z * [new branch] gh/yanbing-j/23/base -> origin/gh/yanbing-j/23/base 2025-09-07T07:34:58.4003585Z * [new branch] gh/yanbing-j/23/head -> origin/gh/yanbing-j/23/head 2025-09-07T07:34:58.4004462Z * [new branch] gh/yanbing-j/23/orig -> origin/gh/yanbing-j/23/orig 2025-09-07T07:34:58.4005439Z * [new branch] gh/yanbing-j/24/base -> origin/gh/yanbing-j/24/base 2025-09-07T07:34:58.4006420Z * [new branch] gh/yanbing-j/24/head -> origin/gh/yanbing-j/24/head 2025-09-07T07:34:58.4006907Z * [new branch] gh/yanbing-j/24/orig -> origin/gh/yanbing-j/24/orig 2025-09-07T07:34:58.4012000Z * 
[new branch] gh/yanbing-j/25/base -> origin/gh/yanbing-j/25/base 2025-09-07T07:34:58.4012190Z * [new branch] gh/yanbing-j/25/head -> origin/gh/yanbing-j/25/head 2025-09-07T07:34:58.4012340Z * [new branch] gh/yanbing-j/25/orig -> origin/gh/yanbing-j/25/orig 2025-09-07T07:34:58.4012491Z * [new branch] gh/yanbing-j/26/base -> origin/gh/yanbing-j/26/base 2025-09-07T07:34:58.4012665Z * [new branch] gh/yanbing-j/26/head -> origin/gh/yanbing-j/26/head 2025-09-07T07:34:58.4012799Z * [new branch] gh/yanbing-j/26/orig -> origin/gh/yanbing-j/26/orig 2025-09-07T07:34:58.4012971Z * [new branch] gh/yanbing-j/36/base -> origin/gh/yanbing-j/36/base 2025-09-07T07:34:58.4013216Z * [new branch] gh/yanbing-j/36/head -> origin/gh/yanbing-j/36/head 2025-09-07T07:34:58.4017521Z * [new branch] gh/yanbing-j/36/orig -> origin/gh/yanbing-j/36/orig 2025-09-07T07:34:58.4017854Z * [new branch] gh/yanbing-j/37/base -> origin/gh/yanbing-j/37/base 2025-09-07T07:34:58.4018024Z * [new branch] gh/yanbing-j/37/head -> origin/gh/yanbing-j/37/head 2025-09-07T07:34:58.4018242Z * [new branch] gh/yanbing-j/37/orig -> origin/gh/yanbing-j/37/orig 2025-09-07T07:34:58.4018406Z * [new branch] gh/yangw-dev/12/base -> origin/gh/yangw-dev/12/base 2025-09-07T07:34:58.4018639Z * [new branch] gh/yangw-dev/12/head -> origin/gh/yangw-dev/12/head 2025-09-07T07:34:58.4019297Z * [new branch] gh/yangw-dev/12/orig -> origin/gh/yangw-dev/12/orig 2025-09-07T07:34:58.4019477Z * [new branch] gh/yangw-dev/13/base -> origin/gh/yangw-dev/13/base 2025-09-07T07:34:58.4019937Z * [new branch] gh/yangw-dev/13/head -> origin/gh/yangw-dev/13/head 2025-09-07T07:34:58.4020460Z * [new branch] gh/yangw-dev/13/orig -> origin/gh/yangw-dev/13/orig 2025-09-07T07:34:58.4022552Z * [new branch] gh/yangw-dev/14/base -> origin/gh/yangw-dev/14/base 2025-09-07T07:34:58.4022897Z * [new branch] gh/yangw-dev/14/head -> origin/gh/yangw-dev/14/head 2025-09-07T07:34:58.4023070Z * [new branch] gh/yangw-dev/14/orig -> origin/gh/yangw-dev/14/orig 2025-09-07T07:34:58.4024904Z * [new branch] gh/yangw-dev/15/base -> origin/gh/yangw-dev/15/base 2025-09-07T07:34:58.4025403Z * [new branch] gh/yangw-dev/15/head -> origin/gh/yangw-dev/15/head 2025-09-07T07:34:58.4025587Z * [new branch] gh/yangw-dev/15/orig -> origin/gh/yangw-dev/15/orig 2025-09-07T07:34:58.4025739Z * [new branch] gh/yangw-dev/16/base -> origin/gh/yangw-dev/16/base 2025-09-07T07:34:58.4027050Z * [new branch] gh/yangw-dev/16/head -> origin/gh/yangw-dev/16/head 2025-09-07T07:34:58.4027264Z * [new branch] gh/yangw-dev/16/orig -> origin/gh/yangw-dev/16/orig 2025-09-07T07:34:58.4029818Z * [new branch] gh/yangw-dev/17/base -> origin/gh/yangw-dev/17/base 2025-09-07T07:34:58.4030148Z * [new branch] gh/yangw-dev/17/head -> origin/gh/yangw-dev/17/head 2025-09-07T07:34:58.4030349Z * [new branch] gh/yangw-dev/17/orig -> origin/gh/yangw-dev/17/orig 2025-09-07T07:34:58.4030510Z * [new branch] gh/yangw-dev/18/base -> origin/gh/yangw-dev/18/base 2025-09-07T07:34:58.4030679Z * [new branch] gh/yangw-dev/18/head -> origin/gh/yangw-dev/18/head 2025-09-07T07:34:58.4032230Z * [new branch] gh/yangw-dev/18/orig -> origin/gh/yangw-dev/18/orig 2025-09-07T07:34:58.4032556Z * [new branch] gh/yangw-dev/19/base -> origin/gh/yangw-dev/19/base 2025-09-07T07:34:58.4032864Z * [new branch] gh/yangw-dev/19/head -> origin/gh/yangw-dev/19/head 2025-09-07T07:34:58.4033261Z * [new branch] gh/yangw-dev/19/orig -> origin/gh/yangw-dev/19/orig 2025-09-07T07:34:58.4035738Z * [new branch] gh/yangw-dev/20/base -> origin/gh/yangw-dev/20/base 2025-09-07T07:34:58.4036068Z * 
[new branch] gh/yangw-dev/20/head -> origin/gh/yangw-dev/20/head 2025-09-07T07:34:58.4036245Z * [new branch] gh/yangw-dev/20/orig -> origin/gh/yangw-dev/20/orig 2025-09-07T07:34:58.4036479Z * [new branch] gh/yangw-dev/21/base -> origin/gh/yangw-dev/21/base 2025-09-07T07:34:58.4036843Z * [new branch] gh/yangw-dev/21/head -> origin/gh/yangw-dev/21/head 2025-09-07T07:34:58.4037748Z * [new branch] gh/yangw-dev/21/orig -> origin/gh/yangw-dev/21/orig 2025-09-07T07:34:58.4041985Z * [new branch] gh/yangw-dev/22/base -> origin/gh/yangw-dev/22/base 2025-09-07T07:34:58.4042355Z * [new branch] gh/yangw-dev/22/head -> origin/gh/yangw-dev/22/head 2025-09-07T07:34:58.4042599Z * [new branch] gh/yangw-dev/22/orig -> origin/gh/yangw-dev/22/orig 2025-09-07T07:34:58.4042830Z * [new branch] gh/yangw-dev/23/base -> origin/gh/yangw-dev/23/base 2025-09-07T07:34:58.4043071Z * [new branch] gh/yangw-dev/23/head -> origin/gh/yangw-dev/23/head 2025-09-07T07:34:58.4043259Z * [new branch] gh/yangw-dev/23/orig -> origin/gh/yangw-dev/23/orig 2025-09-07T07:34:58.4043470Z * [new branch] gh/yangw-dev/24/base -> origin/gh/yangw-dev/24/base 2025-09-07T07:34:58.4044101Z * [new branch] gh/yangw-dev/24/head -> origin/gh/yangw-dev/24/head 2025-09-07T07:34:58.4044291Z * [new branch] gh/yangw-dev/24/orig -> origin/gh/yangw-dev/24/orig 2025-09-07T07:34:58.4045423Z * [new branch] gh/yangw-dev/25/base -> origin/gh/yangw-dev/25/base 2025-09-07T07:34:58.4045872Z * [new branch] gh/yangw-dev/25/head -> origin/gh/yangw-dev/25/head 2025-09-07T07:34:58.4046866Z * [new branch] gh/yangw-dev/25/orig -> origin/gh/yangw-dev/25/orig 2025-09-07T07:34:58.4052034Z * [new branch] gh/yangw-dev/26/base -> origin/gh/yangw-dev/26/base 2025-09-07T07:34:58.4052364Z * [new branch] gh/yangw-dev/26/head -> origin/gh/yangw-dev/26/head 2025-09-07T07:34:58.4052543Z * [new branch] gh/yangw-dev/26/orig -> origin/gh/yangw-dev/26/orig 2025-09-07T07:34:58.4053011Z * [new branch] gh/yangw-dev/27/base -> origin/gh/yangw-dev/27/base 2025-09-07T07:34:58.4053159Z * [new branch] gh/yangw-dev/27/head -> origin/gh/yangw-dev/27/head 2025-09-07T07:34:58.4053310Z * [new branch] gh/yangw-dev/27/orig -> origin/gh/yangw-dev/27/orig 2025-09-07T07:34:58.4053827Z * [new branch] gh/ydwu4/233/base -> origin/gh/ydwu4/233/base 2025-09-07T07:34:58.4054007Z * [new branch] gh/ydwu4/233/head -> origin/gh/ydwu4/233/head 2025-09-07T07:34:58.4054335Z * [new branch] gh/ydwu4/233/orig -> origin/gh/ydwu4/233/orig 2025-09-07T07:34:58.4058679Z * [new branch] gh/ydwu4/246/base -> origin/gh/ydwu4/246/base 2025-09-07T07:34:58.4058986Z * [new branch] gh/ydwu4/246/head -> origin/gh/ydwu4/246/head 2025-09-07T07:34:58.4059216Z * [new branch] gh/ydwu4/246/orig -> origin/gh/ydwu4/246/orig 2025-09-07T07:34:58.4059509Z * [new branch] gh/ydwu4/253/base -> origin/gh/ydwu4/253/base 2025-09-07T07:34:58.4059658Z * [new branch] gh/ydwu4/253/head -> origin/gh/ydwu4/253/head 2025-09-07T07:34:58.4059875Z * [new branch] gh/ydwu4/253/orig -> origin/gh/ydwu4/253/orig 2025-09-07T07:34:58.4060546Z * [new branch] gh/ydwu4/255/base -> origin/gh/ydwu4/255/base 2025-09-07T07:34:58.4061113Z * [new branch] gh/ydwu4/255/head -> origin/gh/ydwu4/255/head 2025-09-07T07:34:58.4061555Z * [new branch] gh/ydwu4/255/orig -> origin/gh/ydwu4/255/orig 2025-09-07T07:34:58.4064050Z * [new branch] gh/ydwu4/259/base -> origin/gh/ydwu4/259/base 2025-09-07T07:34:58.4064408Z * [new branch] gh/ydwu4/259/head -> origin/gh/ydwu4/259/head 2025-09-07T07:34:58.4064561Z * [new branch] gh/ydwu4/259/orig -> origin/gh/ydwu4/259/orig 
2025-09-07T07:34:58.4066601Z * [new branch] gh/ydwu4/262/base -> origin/gh/ydwu4/262/base 2025-09-07T07:34:58.4066925Z * [new branch] gh/ydwu4/262/head -> origin/gh/ydwu4/262/head 2025-09-07T07:34:58.4067085Z * [new branch] gh/ydwu4/262/orig -> origin/gh/ydwu4/262/orig 2025-09-07T07:34:58.4067289Z * [new branch] gh/ydwu4/263/base -> origin/gh/ydwu4/263/base 2025-09-07T07:34:58.4067449Z * [new branch] gh/ydwu4/263/head -> origin/gh/ydwu4/263/head 2025-09-07T07:34:58.4069481Z * [new branch] gh/ydwu4/263/orig -> origin/gh/ydwu4/263/orig 2025-09-07T07:34:58.4069811Z * [new branch] gh/ydwu4/269/base -> origin/gh/ydwu4/269/base 2025-09-07T07:34:58.4069961Z * [new branch] gh/ydwu4/269/head -> origin/gh/ydwu4/269/head 2025-09-07T07:34:58.4070426Z * [new branch] gh/ydwu4/269/orig -> origin/gh/ydwu4/269/orig 2025-09-07T07:34:58.4072505Z * [new branch] gh/ydwu4/270/base -> origin/gh/ydwu4/270/base 2025-09-07T07:34:58.4072676Z * [new branch] gh/ydwu4/270/head -> origin/gh/ydwu4/270/head 2025-09-07T07:34:58.4072819Z * [new branch] gh/ydwu4/270/orig -> origin/gh/ydwu4/270/orig 2025-09-07T07:34:58.4077394Z * [new branch] gh/ydwu4/272/base -> origin/gh/ydwu4/272/base 2025-09-07T07:34:58.4077585Z * [new branch] gh/ydwu4/272/head -> origin/gh/ydwu4/272/head 2025-09-07T07:34:58.4077721Z * [new branch] gh/ydwu4/272/orig -> origin/gh/ydwu4/272/orig 2025-09-07T07:34:58.4077859Z * [new branch] gh/ydwu4/275/base -> origin/gh/ydwu4/275/base 2025-09-07T07:34:58.4077987Z * [new branch] gh/ydwu4/275/head -> origin/gh/ydwu4/275/head 2025-09-07T07:34:58.4078115Z * [new branch] gh/ydwu4/275/orig -> origin/gh/ydwu4/275/orig 2025-09-07T07:34:58.4078410Z * [new branch] gh/ydwu4/276/base -> origin/gh/ydwu4/276/base 2025-09-07T07:34:58.4078543Z * [new branch] gh/ydwu4/276/head -> origin/gh/ydwu4/276/head 2025-09-07T07:34:58.4079819Z * [new branch] gh/ydwu4/276/orig -> origin/gh/ydwu4/276/orig 2025-09-07T07:34:58.4080807Z * [new branch] gh/ydwu4/279/base -> origin/gh/ydwu4/279/base 2025-09-07T07:34:58.4081176Z * [new branch] gh/ydwu4/279/head -> origin/gh/ydwu4/279/head 2025-09-07T07:34:58.4082150Z * [new branch] gh/ydwu4/279/orig -> origin/gh/ydwu4/279/orig 2025-09-07T07:34:58.4083257Z * [new branch] gh/ydwu4/283/base -> origin/gh/ydwu4/283/base 2025-09-07T07:34:58.4083531Z * [new branch] gh/ydwu4/283/head -> origin/gh/ydwu4/283/head 2025-09-07T07:34:58.4084545Z * [new branch] gh/ydwu4/283/orig -> origin/gh/ydwu4/283/orig 2025-09-07T07:34:58.4085157Z * [new branch] gh/ydwu4/289/base -> origin/gh/ydwu4/289/base 2025-09-07T07:34:58.4086065Z * [new branch] gh/ydwu4/289/head -> origin/gh/ydwu4/289/head 2025-09-07T07:34:58.4087377Z * [new branch] gh/ydwu4/289/orig -> origin/gh/ydwu4/289/orig 2025-09-07T07:34:58.4088034Z * [new branch] gh/ydwu4/290/base -> origin/gh/ydwu4/290/base 2025-09-07T07:34:58.4089347Z * [new branch] gh/ydwu4/290/head -> origin/gh/ydwu4/290/head 2025-09-07T07:34:58.4089824Z * [new branch] gh/ydwu4/290/orig -> origin/gh/ydwu4/290/orig 2025-09-07T07:34:58.4091333Z * [new branch] gh/ydwu4/291/base -> origin/gh/ydwu4/291/base 2025-09-07T07:34:58.4091674Z * [new branch] gh/ydwu4/291/head -> origin/gh/ydwu4/291/head 2025-09-07T07:34:58.4094251Z * [new branch] gh/ydwu4/291/orig -> origin/gh/ydwu4/291/orig 2025-09-07T07:34:58.4094464Z * [new branch] gh/ydwu4/292/base -> origin/gh/ydwu4/292/base 2025-09-07T07:34:58.4094616Z * [new branch] gh/ydwu4/292/head -> origin/gh/ydwu4/292/head 2025-09-07T07:34:58.4094845Z * [new branch] gh/ydwu4/292/orig -> origin/gh/ydwu4/292/orig 2025-09-07T07:34:58.4099032Z * [new branch] 
gh/ydwu4/293/base -> origin/gh/ydwu4/293/base 2025-09-07T07:34:58.4099372Z * [new branch] gh/ydwu4/293/head -> origin/gh/ydwu4/293/head 2025-09-07T07:34:58.4099623Z * [new branch] gh/ydwu4/293/orig -> origin/gh/ydwu4/293/orig 2025-09-07T07:34:58.4099868Z * [new branch] gh/ydwu4/294/base -> origin/gh/ydwu4/294/base 2025-09-07T07:34:58.4100083Z * [new branch] gh/ydwu4/294/head -> origin/gh/ydwu4/294/head 2025-09-07T07:34:58.4100231Z * [new branch] gh/ydwu4/294/orig -> origin/gh/ydwu4/294/orig 2025-09-07T07:34:58.4100529Z * [new branch] gh/ydwu4/295/base -> origin/gh/ydwu4/295/base 2025-09-07T07:34:58.4102569Z * [new branch] gh/ydwu4/295/head -> origin/gh/ydwu4/295/head 2025-09-07T07:34:58.4102893Z * [new branch] gh/ydwu4/295/orig -> origin/gh/ydwu4/295/orig 2025-09-07T07:34:58.4103102Z * [new branch] gh/ydwu4/296/base -> origin/gh/ydwu4/296/base 2025-09-07T07:34:58.4103346Z * [new branch] gh/ydwu4/296/head -> origin/gh/ydwu4/296/head 2025-09-07T07:34:58.4103883Z * [new branch] gh/ydwu4/296/orig -> origin/gh/ydwu4/296/orig 2025-09-07T07:34:58.4108074Z * [new branch] gh/ydwu4/300/base -> origin/gh/ydwu4/300/base 2025-09-07T07:34:58.4108403Z * [new branch] gh/ydwu4/300/head -> origin/gh/ydwu4/300/head 2025-09-07T07:34:58.4113522Z * [new branch] gh/ydwu4/300/orig -> origin/gh/ydwu4/300/orig 2025-09-07T07:34:58.4114022Z * [new branch] gh/ydwu4/301/base -> origin/gh/ydwu4/301/base 2025-09-07T07:34:58.4114194Z * [new branch] gh/ydwu4/301/head -> origin/gh/ydwu4/301/head 2025-09-07T07:34:58.4114324Z * [new branch] gh/ydwu4/301/orig -> origin/gh/ydwu4/301/orig 2025-09-07T07:34:58.4114464Z * [new branch] gh/ydwu4/302/base -> origin/gh/ydwu4/302/base 2025-09-07T07:34:58.4114592Z * [new branch] gh/ydwu4/302/head -> origin/gh/ydwu4/302/head 2025-09-07T07:34:58.4114852Z * [new branch] gh/ydwu4/302/orig -> origin/gh/ydwu4/302/orig 2025-09-07T07:34:58.4114987Z * [new branch] gh/ydwu4/303/base -> origin/gh/ydwu4/303/base 2025-09-07T07:34:58.4115121Z * [new branch] gh/ydwu4/303/head -> origin/gh/ydwu4/303/head 2025-09-07T07:34:58.4115337Z * [new branch] gh/ydwu4/303/orig -> origin/gh/ydwu4/303/orig 2025-09-07T07:34:58.4120203Z * [new branch] gh/ydwu4/304/base -> origin/gh/ydwu4/304/base 2025-09-07T07:34:58.4125379Z * [new branch] gh/ydwu4/304/head -> origin/gh/ydwu4/304/head 2025-09-07T07:34:58.4125573Z * [new branch] gh/ydwu4/304/orig -> origin/gh/ydwu4/304/orig 2025-09-07T07:34:58.4125859Z * [new branch] gh/ydwu4/305/base -> origin/gh/ydwu4/305/base 2025-09-07T07:34:58.4126000Z * [new branch] gh/ydwu4/305/head -> origin/gh/ydwu4/305/head 2025-09-07T07:34:58.4126149Z * [new branch] gh/ydwu4/305/orig -> origin/gh/ydwu4/305/orig 2025-09-07T07:34:58.4126289Z * [new branch] gh/ydwu4/306/base -> origin/gh/ydwu4/306/base 2025-09-07T07:34:58.4126432Z * [new branch] gh/ydwu4/306/head -> origin/gh/ydwu4/306/head 2025-09-07T07:34:58.4126565Z * [new branch] gh/ydwu4/306/orig -> origin/gh/ydwu4/306/orig 2025-09-07T07:34:58.4126908Z * [new branch] gh/ydwu4/307/base -> origin/gh/ydwu4/307/base 2025-09-07T07:34:58.4127056Z * [new branch] gh/ydwu4/307/head -> origin/gh/ydwu4/307/head 2025-09-07T07:34:58.4127189Z * [new branch] gh/ydwu4/307/orig -> origin/gh/ydwu4/307/orig 2025-09-07T07:34:58.4127337Z * [new branch] gh/ydwu4/308/base -> origin/gh/ydwu4/308/base 2025-09-07T07:34:58.4127469Z * [new branch] gh/ydwu4/308/head -> origin/gh/ydwu4/308/head 2025-09-07T07:34:58.4127610Z * [new branch] gh/ydwu4/308/orig -> origin/gh/ydwu4/308/orig 2025-09-07T07:34:58.4127741Z * [new branch] gh/ydwu4/309/base -> 
origin/gh/ydwu4/309/base 2025-09-07T07:34:58.4127888Z * [new branch] gh/ydwu4/309/head -> origin/gh/ydwu4/309/head 2025-09-07T07:34:58.4128021Z * [new branch] gh/ydwu4/309/orig -> origin/gh/ydwu4/309/orig 2025-09-07T07:34:58.4140326Z * [new branch] gh/ydwu4/310/base -> origin/gh/ydwu4/310/base 2025-09-07T07:34:58.4140663Z * [new branch] gh/ydwu4/310/head -> origin/gh/ydwu4/310/head 2025-09-07T07:34:58.4140842Z * [new branch] gh/ydwu4/310/orig -> origin/gh/ydwu4/310/orig 2025-09-07T07:34:58.4141006Z * [new branch] gh/ydwu4/311/base -> origin/gh/ydwu4/311/base 2025-09-07T07:34:58.4141264Z * [new branch] gh/ydwu4/311/head -> origin/gh/ydwu4/311/head 2025-09-07T07:34:58.4141394Z * [new branch] gh/ydwu4/311/orig -> origin/gh/ydwu4/311/orig 2025-09-07T07:34:58.4141655Z * [new branch] gh/ydwu4/312/base -> origin/gh/ydwu4/312/base 2025-09-07T07:34:58.4142241Z * [new branch] gh/ydwu4/312/head -> origin/gh/ydwu4/312/head 2025-09-07T07:34:58.4142432Z * [new branch] gh/ydwu4/312/orig -> origin/gh/ydwu4/312/orig 2025-09-07T07:34:58.4142718Z * [new branch] gh/ydwu4/313/base -> origin/gh/ydwu4/313/base 2025-09-07T07:34:58.4142859Z * [new branch] gh/ydwu4/313/head -> origin/gh/ydwu4/313/head 2025-09-07T07:34:58.4142986Z * [new branch] gh/ydwu4/313/orig -> origin/gh/ydwu4/313/orig 2025-09-07T07:34:58.4143125Z * [new branch] gh/ydwu4/314/base -> origin/gh/ydwu4/314/base 2025-09-07T07:34:58.4143255Z * [new branch] gh/ydwu4/314/head -> origin/gh/ydwu4/314/head 2025-09-07T07:34:58.4143382Z * [new branch] gh/ydwu4/314/orig -> origin/gh/ydwu4/314/orig 2025-09-07T07:34:58.4143517Z * [new branch] gh/ydwu4/315/base -> origin/gh/ydwu4/315/base 2025-09-07T07:34:58.4143642Z * [new branch] gh/ydwu4/315/head -> origin/gh/ydwu4/315/head 2025-09-07T07:34:58.4143773Z * [new branch] gh/ydwu4/315/orig -> origin/gh/ydwu4/315/orig 2025-09-07T07:34:58.4144063Z * [new branch] gh/ydwu4/316/base -> origin/gh/ydwu4/316/base 2025-09-07T07:34:58.4144218Z * [new branch] gh/ydwu4/316/head -> origin/gh/ydwu4/316/head 2025-09-07T07:34:58.4144342Z * [new branch] gh/ydwu4/316/orig -> origin/gh/ydwu4/316/orig 2025-09-07T07:34:58.4144517Z * [new branch] gh/ydwu4/317/base -> origin/gh/ydwu4/317/base 2025-09-07T07:34:58.4144778Z * [new branch] gh/ydwu4/317/head -> origin/gh/ydwu4/317/head 2025-09-07T07:34:58.4144914Z * [new branch] gh/ydwu4/317/orig -> origin/gh/ydwu4/317/orig 2025-09-07T07:34:58.4150693Z * [new branch] gh/ydwu4/318/base -> origin/gh/ydwu4/318/base 2025-09-07T07:34:58.4151018Z * [new branch] gh/ydwu4/318/head -> origin/gh/ydwu4/318/head 2025-09-07T07:34:58.4151185Z * [new branch] gh/ydwu4/318/orig -> origin/gh/ydwu4/318/orig 2025-09-07T07:34:58.4151419Z * [new branch] gh/ydwu4/319/base -> origin/gh/ydwu4/319/base 2025-09-07T07:34:58.4151573Z * [new branch] gh/ydwu4/319/head -> origin/gh/ydwu4/319/head 2025-09-07T07:34:58.4154420Z * [new branch] gh/ydwu4/319/orig -> origin/gh/ydwu4/319/orig 2025-09-07T07:34:58.4154596Z * [new branch] gh/ydwu4/320/base -> origin/gh/ydwu4/320/base 2025-09-07T07:34:58.4154744Z * [new branch] gh/ydwu4/320/head -> origin/gh/ydwu4/320/head 2025-09-07T07:34:58.4154870Z * [new branch] gh/ydwu4/320/orig -> origin/gh/ydwu4/320/orig 2025-09-07T07:34:58.4155001Z * [new branch] gh/ydwu4/321/base -> origin/gh/ydwu4/321/base 2025-09-07T07:34:58.4155128Z * [new branch] gh/ydwu4/321/head -> origin/gh/ydwu4/321/head 2025-09-07T07:34:58.4155251Z * [new branch] gh/ydwu4/321/orig -> origin/gh/ydwu4/321/orig 2025-09-07T07:34:58.4155392Z * [new branch] gh/ydwu4/322/base -> origin/gh/ydwu4/322/base 
2025-09-07T07:34:58.4155516Z * [new branch] gh/ydwu4/322/head -> origin/gh/ydwu4/322/head 2025-09-07T07:34:58.4160531Z * [new branch] gh/ydwu4/322/orig -> origin/gh/ydwu4/322/orig 2025-09-07T07:34:58.4162925Z * [new branch] gh/ydwu4/323/base -> origin/gh/ydwu4/323/base 2025-09-07T07:34:58.4163106Z * [new branch] gh/ydwu4/323/head -> origin/gh/ydwu4/323/head 2025-09-07T07:34:58.4163251Z * [new branch] gh/ydwu4/323/orig -> origin/gh/ydwu4/323/orig 2025-09-07T07:34:58.4163389Z * [new branch] gh/ydwu4/324/base -> origin/gh/ydwu4/324/base 2025-09-07T07:34:58.4163526Z * [new branch] gh/ydwu4/324/head -> origin/gh/ydwu4/324/head 2025-09-07T07:34:58.4163672Z * [new branch] gh/ydwu4/324/orig -> origin/gh/ydwu4/324/orig 2025-09-07T07:34:58.4164004Z * [new branch] gh/yf225/133/base -> origin/gh/yf225/133/base 2025-09-07T07:34:58.4164146Z * [new branch] gh/yf225/133/head -> origin/gh/yf225/133/head 2025-09-07T07:34:58.4164276Z * [new branch] gh/yf225/171/base -> origin/gh/yf225/171/base 2025-09-07T07:34:58.4164412Z * [new branch] gh/yf225/171/head -> origin/gh/yf225/171/head 2025-09-07T07:34:58.4164553Z * [new branch] gh/yf225/171/orig -> origin/gh/yf225/171/orig 2025-09-07T07:34:58.4164685Z * [new branch] gh/yf225/172/base -> origin/gh/yf225/172/base 2025-09-07T07:34:58.4164822Z * [new branch] gh/yf225/172/head -> origin/gh/yf225/172/head 2025-09-07T07:34:58.4165135Z * [new branch] gh/yf225/172/orig -> origin/gh/yf225/172/orig 2025-09-07T07:34:58.4165305Z * [new branch] gh/yf225/93/base -> origin/gh/yf225/93/base 2025-09-07T07:34:58.4165442Z * [new branch] gh/yf225/93/head -> origin/gh/yf225/93/head 2025-09-07T07:34:58.4171891Z * [new branch] gh/yifuwang/152/base -> origin/gh/yifuwang/152/base 2025-09-07T07:34:58.4172246Z * [new branch] gh/yifuwang/152/head -> origin/gh/yifuwang/152/head 2025-09-07T07:34:58.4172713Z * [new branch] gh/yifuwang/152/orig -> origin/gh/yifuwang/152/orig 2025-09-07T07:34:58.4173014Z * [new branch] gh/yifuwang/195/base -> origin/gh/yifuwang/195/base 2025-09-07T07:34:58.4173597Z * [new branch] gh/yifuwang/195/head -> origin/gh/yifuwang/195/head 2025-09-07T07:34:58.4173767Z * [new branch] gh/yifuwang/195/orig -> origin/gh/yifuwang/195/orig 2025-09-07T07:34:58.4173914Z * [new branch] gh/yiming0416/1/base -> origin/gh/yiming0416/1/base 2025-09-07T07:34:58.4174077Z * [new branch] gh/yiming0416/1/head -> origin/gh/yiming0416/1/head 2025-09-07T07:34:58.4174316Z * [new branch] gh/yiming0416/2/base -> origin/gh/yiming0416/2/base 2025-09-07T07:34:58.4174483Z * [new branch] gh/yiming0416/2/head -> origin/gh/yiming0416/2/head 2025-09-07T07:34:58.4175898Z * [new branch] gh/ysiraichi/79/base -> origin/gh/ysiraichi/79/base 2025-09-07T07:34:58.4176075Z * [new branch] gh/ysiraichi/79/head -> origin/gh/ysiraichi/79/head 2025-09-07T07:34:58.4178597Z * [new branch] gh/ysiraichi/79/orig -> origin/gh/ysiraichi/79/orig 2025-09-07T07:34:58.4182920Z * [new branch] gh/ysiraichi/88/base -> origin/gh/ysiraichi/88/base 2025-09-07T07:34:58.4187320Z * [new branch] gh/ysiraichi/88/head -> origin/gh/ysiraichi/88/head 2025-09-07T07:34:58.4192535Z * [new branch] gh/ysiraichi/88/orig -> origin/gh/ysiraichi/88/orig 2025-09-07T07:34:58.4192855Z * [new branch] gh/zhxchen17/25/base -> origin/gh/zhxchen17/25/base 2025-09-07T07:34:58.4193065Z * [new branch] gh/zhxchen17/25/head -> origin/gh/zhxchen17/25/head 2025-09-07T07:34:58.4193277Z * [new branch] gh/zhxchen17/25/orig -> origin/gh/zhxchen17/25/orig 2025-09-07T07:34:58.4193436Z * [new branch] gh/zhxchen17/31/base -> origin/gh/zhxchen17/31/base 
2025-09-07T07:34:58.4193574Z * [new branch] gh/zhxchen17/31/head -> origin/gh/zhxchen17/31/head 2025-09-07T07:34:58.4193827Z * [new branch] gh/zhxchen17/31/orig -> origin/gh/zhxchen17/31/orig 2025-09-07T07:34:58.4193970Z * [new branch] gh/zhxchen17/34/base -> origin/gh/zhxchen17/34/base 2025-09-07T07:34:58.4194213Z * [new branch] gh/zhxchen17/34/head -> origin/gh/zhxchen17/34/head 2025-09-07T07:34:58.4194356Z * [new branch] gh/zhxchen17/35/base -> origin/gh/zhxchen17/35/base 2025-09-07T07:34:58.4194755Z * [new branch] gh/zhxchen17/35/head -> origin/gh/zhxchen17/35/head 2025-09-07T07:34:58.4194902Z * [new branch] gh/zhxchen17/37/base -> origin/gh/zhxchen17/37/base 2025-09-07T07:34:58.4195569Z * [new branch] gh/zhxchen17/37/head -> origin/gh/zhxchen17/37/head 2025-09-07T07:34:58.4195808Z * [new branch] gh/zhxchen17/37/orig -> origin/gh/zhxchen17/37/orig 2025-09-07T07:34:58.4195968Z * [new branch] gh/zhxchen17/38/base -> origin/gh/zhxchen17/38/base 2025-09-07T07:34:58.4196122Z * [new branch] gh/zhxchen17/38/head -> origin/gh/zhxchen17/38/head 2025-09-07T07:34:58.4196284Z * [new branch] gh/zhxchen17/38/orig -> origin/gh/zhxchen17/38/orig 2025-09-07T07:34:58.4196437Z * [new branch] gh/zhxchen17/39/base -> origin/gh/zhxchen17/39/base 2025-09-07T07:34:58.4196576Z * [new branch] gh/zhxchen17/39/head -> origin/gh/zhxchen17/39/head 2025-09-07T07:34:58.4196750Z * [new branch] gh/zhxchen17/39/orig -> origin/gh/zhxchen17/39/orig 2025-09-07T07:34:58.4196950Z * [new branch] gh/zhxchen17/40/base -> origin/gh/zhxchen17/40/base 2025-09-07T07:34:58.4197083Z * [new branch] gh/zhxchen17/40/head -> origin/gh/zhxchen17/40/head 2025-09-07T07:34:58.4197406Z * [new branch] gh/zhxchen17/40/orig -> origin/gh/zhxchen17/40/orig 2025-09-07T07:34:58.4197598Z * [new branch] gh/zhxchen17/41/base -> origin/gh/zhxchen17/41/base 2025-09-07T07:34:58.4197740Z * [new branch] gh/zhxchen17/41/head -> origin/gh/zhxchen17/41/head 2025-09-07T07:34:58.4204050Z * [new branch] gh/zhxchen17/41/orig -> origin/gh/zhxchen17/41/orig 2025-09-07T07:34:58.4204225Z * [new branch] gh/zhxchen17/42/base -> origin/gh/zhxchen17/42/base 2025-09-07T07:34:58.4204427Z * [new branch] gh/zhxchen17/42/head -> origin/gh/zhxchen17/42/head 2025-09-07T07:34:58.4204568Z * [new branch] gh/zhxchen17/42/orig -> origin/gh/zhxchen17/42/orig 2025-09-07T07:34:58.4204730Z * [new branch] gh/zhxchen17/43/base -> origin/gh/zhxchen17/43/base 2025-09-07T07:34:58.4204870Z * [new branch] gh/zhxchen17/43/head -> origin/gh/zhxchen17/43/head 2025-09-07T07:34:58.4205010Z * [new branch] gh/zhxchen17/43/orig -> origin/gh/zhxchen17/43/orig 2025-09-07T07:34:58.4205156Z * [new branch] gh/zhxchen17/44/base -> origin/gh/zhxchen17/44/base 2025-09-07T07:34:58.4205296Z * [new branch] gh/zhxchen17/44/head -> origin/gh/zhxchen17/44/head 2025-09-07T07:34:58.4205604Z * [new branch] gh/zhxchen17/44/orig -> origin/gh/zhxchen17/44/orig 2025-09-07T07:34:58.4206233Z * [new branch] gh/zhxchen17/45/base -> origin/gh/zhxchen17/45/base 2025-09-07T07:34:58.4207517Z * [new branch] gh/zhxchen17/45/head -> origin/gh/zhxchen17/45/head 2025-09-07T07:34:58.4207799Z * [new branch] gh/zhxchen17/45/orig -> origin/gh/zhxchen17/45/orig 2025-09-07T07:34:58.4211896Z * [new branch] gh/zklaus/10/base -> origin/gh/zklaus/10/base 2025-09-07T07:34:58.4212213Z * [new branch] gh/zklaus/10/head -> origin/gh/zklaus/10/head 2025-09-07T07:34:58.4212414Z * [new branch] gh/zklaus/10/orig -> origin/gh/zklaus/10/orig 2025-09-07T07:34:58.4212633Z * [new branch] gh/zklaus/11/base -> origin/gh/zklaus/11/base 
2025-09-07T07:34:58.4212787Z * [new branch] gh/zklaus/11/head -> origin/gh/zklaus/11/head 2025-09-07T07:34:58.4213003Z * [new branch] gh/zklaus/11/orig -> origin/gh/zklaus/11/orig 2025-09-07T07:34:58.4213799Z * [new branch] gh/zklaus/12/base -> origin/gh/zklaus/12/base 2025-09-07T07:34:58.4214208Z * [new branch] gh/zklaus/12/head -> origin/gh/zklaus/12/head 2025-09-07T07:34:58.4218672Z * [new branch] gh/zklaus/12/orig -> origin/gh/zklaus/12/orig 2025-09-07T07:34:58.4218842Z * [new branch] gh/zklaus/14/base -> origin/gh/zklaus/14/base 2025-09-07T07:34:58.4218976Z * [new branch] gh/zklaus/14/head -> origin/gh/zklaus/14/head 2025-09-07T07:34:58.4219126Z * [new branch] gh/zklaus/14/orig -> origin/gh/zklaus/14/orig 2025-09-07T07:34:58.4219251Z * [new branch] gh/zklaus/15/base -> origin/gh/zklaus/15/base 2025-09-07T07:34:58.4219386Z * [new branch] gh/zklaus/15/head -> origin/gh/zklaus/15/head 2025-09-07T07:34:58.4219650Z * [new branch] gh/zklaus/15/orig -> origin/gh/zklaus/15/orig 2025-09-07T07:34:58.4221209Z * [new branch] gh/zklaus/16/base -> origin/gh/zklaus/16/base 2025-09-07T07:34:58.4221552Z * [new branch] gh/zklaus/16/head -> origin/gh/zklaus/16/head 2025-09-07T07:34:58.4221844Z * [new branch] gh/zklaus/16/orig -> origin/gh/zklaus/16/orig 2025-09-07T07:34:58.4224655Z * [new branch] gh/zklaus/17/base -> origin/gh/zklaus/17/base 2025-09-07T07:34:58.4224973Z * [new branch] gh/zklaus/17/head -> origin/gh/zklaus/17/head 2025-09-07T07:34:58.4225276Z * [new branch] gh/zklaus/17/orig -> origin/gh/zklaus/17/orig 2025-09-07T07:34:58.4225507Z * [new branch] gh/zklaus/18/base -> origin/gh/zklaus/18/base 2025-09-07T07:34:58.4225914Z * [new branch] gh/zklaus/18/head -> origin/gh/zklaus/18/head 2025-09-07T07:34:58.4226268Z * [new branch] gh/zklaus/18/orig -> origin/gh/zklaus/18/orig 2025-09-07T07:34:58.4226965Z * [new branch] gh/zklaus/19/base -> origin/gh/zklaus/19/base 2025-09-07T07:34:58.4229604Z * [new branch] gh/zklaus/19/head -> origin/gh/zklaus/19/head 2025-09-07T07:34:58.4229915Z * [new branch] gh/zklaus/19/orig -> origin/gh/zklaus/19/orig 2025-09-07T07:34:58.4234634Z * [new branch] gh/zklaus/20/base -> origin/gh/zklaus/20/base 2025-09-07T07:34:58.4234961Z * [new branch] gh/zklaus/20/head -> origin/gh/zklaus/20/head 2025-09-07T07:34:58.4235161Z * [new branch] gh/zklaus/20/orig -> origin/gh/zklaus/20/orig 2025-09-07T07:34:58.4235383Z * [new branch] gh/zklaus/7/base -> origin/gh/zklaus/7/base 2025-09-07T07:34:58.4235535Z * [new branch] gh/zklaus/7/head -> origin/gh/zklaus/7/head 2025-09-07T07:34:58.4235762Z * [new branch] gh/zklaus/7/orig -> origin/gh/zklaus/7/orig 2025-09-07T07:34:58.4236119Z * [new branch] gh/zklaus/9/base -> origin/gh/zklaus/9/base 2025-09-07T07:34:58.4236263Z * [new branch] gh/zklaus/9/head -> origin/gh/zklaus/9/head 2025-09-07T07:34:58.4236401Z * [new branch] gh/zklaus/9/orig -> origin/gh/zklaus/9/orig 2025-09-07T07:34:58.4236553Z * [new branch] gh/zou3519/1175/base -> origin/gh/zou3519/1175/base 2025-09-07T07:34:58.4236688Z * [new branch] gh/zou3519/1175/head -> origin/gh/zou3519/1175/head 2025-09-07T07:34:58.4236828Z * [new branch] gh/zou3519/1175/orig -> origin/gh/zou3519/1175/orig 2025-09-07T07:34:58.4242069Z * [new branch] gh/zou3519/1177/base -> origin/gh/zou3519/1177/base 2025-09-07T07:34:58.4242250Z * [new branch] gh/zou3519/1177/head -> origin/gh/zou3519/1177/head 2025-09-07T07:34:58.4242396Z * [new branch] gh/zou3519/1177/orig -> origin/gh/zou3519/1177/orig 2025-09-07T07:34:58.4242532Z * [new branch] gh/zou3519/1191/base -> origin/gh/zou3519/1191/base 
2025-09-07T07:34:58.4242894Z * [new branch] gh/zou3519/1191/head -> origin/gh/zou3519/1191/head 2025-09-07T07:34:58.4243036Z * [new branch] gh/zou3519/1191/orig -> origin/gh/zou3519/1191/orig 2025-09-07T07:34:58.4243170Z * [new branch] gh/zou3519/1192/base -> origin/gh/zou3519/1192/base 2025-09-07T07:34:58.4243319Z * [new branch] gh/zou3519/1192/head -> origin/gh/zou3519/1192/head 2025-09-07T07:34:58.4243453Z * [new branch] gh/zou3519/1192/orig -> origin/gh/zou3519/1192/orig 2025-09-07T07:34:58.4243588Z * [new branch] gh/zou3519/1193/base -> origin/gh/zou3519/1193/base 2025-09-07T07:34:58.4244009Z * [new branch] gh/zou3519/1193/head -> origin/gh/zou3519/1193/head 2025-09-07T07:34:58.4245839Z * [new branch] gh/zou3519/1193/orig -> origin/gh/zou3519/1193/orig 2025-09-07T07:34:58.4246163Z * [new branch] gh/zou3519/1194/base -> origin/gh/zou3519/1194/base 2025-09-07T07:34:58.4253309Z * [new branch] gh/zou3519/1194/head -> origin/gh/zou3519/1194/head 2025-09-07T07:34:58.4256393Z * [new branch] gh/zou3519/1194/orig -> origin/gh/zou3519/1194/orig 2025-09-07T07:34:58.4262153Z * [new branch] gh/zou3519/1195/base -> origin/gh/zou3519/1195/base 2025-09-07T07:34:58.4264380Z * [new branch] gh/zou3519/1195/head -> origin/gh/zou3519/1195/head 2025-09-07T07:34:58.4264680Z * [new branch] gh/zou3519/1195/orig -> origin/gh/zou3519/1195/orig 2025-09-07T07:34:58.4270288Z * [new branch] gh/zou3519/1196/base -> origin/gh/zou3519/1196/base 2025-09-07T07:34:58.4272217Z * [new branch] gh/zou3519/1196/head -> origin/gh/zou3519/1196/head 2025-09-07T07:34:58.4272554Z * [new branch] gh/zou3519/1196/orig -> origin/gh/zou3519/1196/orig 2025-09-07T07:34:58.4272819Z * [new branch] gh/zou3519/1197/base -> origin/gh/zou3519/1197/base 2025-09-07T07:34:58.4272979Z * [new branch] gh/zou3519/1197/head -> origin/gh/zou3519/1197/head 2025-09-07T07:34:58.4273204Z * [new branch] gh/zou3519/1197/orig -> origin/gh/zou3519/1197/orig 2025-09-07T07:34:58.4273456Z * [new branch] gh/zpcore/1/base -> origin/gh/zpcore/1/base 2025-09-07T07:34:58.4273598Z * [new branch] gh/zpcore/1/head -> origin/gh/zpcore/1/head 2025-09-07T07:34:58.4273874Z * [new branch] gh/zpcore/10/base -> origin/gh/zpcore/10/base 2025-09-07T07:34:58.4274028Z * [new branch] gh/zpcore/10/head -> origin/gh/zpcore/10/head 2025-09-07T07:34:58.4274261Z * [new branch] gh/zpcore/10/orig -> origin/gh/zpcore/10/orig 2025-09-07T07:34:58.4274414Z * [new branch] gh/zpcore/11/base -> origin/gh/zpcore/11/base 2025-09-07T07:34:58.4274548Z * [new branch] gh/zpcore/11/head -> origin/gh/zpcore/11/head 2025-09-07T07:34:58.4274812Z * [new branch] gh/zpcore/11/orig -> origin/gh/zpcore/11/orig 2025-09-07T07:34:58.4274957Z * [new branch] gh/zpcore/12/base -> origin/gh/zpcore/12/base 2025-09-07T07:34:58.4275095Z * [new branch] gh/zpcore/12/head -> origin/gh/zpcore/12/head 2025-09-07T07:34:58.4275223Z * [new branch] gh/zpcore/12/orig -> origin/gh/zpcore/12/orig 2025-09-07T07:34:58.4275355Z * [new branch] gh/zpcore/13/base -> origin/gh/zpcore/13/base 2025-09-07T07:34:58.4275498Z * [new branch] gh/zpcore/13/head -> origin/gh/zpcore/13/head 2025-09-07T07:34:58.4275631Z * [new branch] gh/zpcore/13/orig -> origin/gh/zpcore/13/orig 2025-09-07T07:34:58.4275757Z * [new branch] gh/zpcore/14/base -> origin/gh/zpcore/14/base 2025-09-07T07:34:58.4276095Z * [new branch] gh/zpcore/14/head -> origin/gh/zpcore/14/head 2025-09-07T07:34:58.4276238Z * [new branch] gh/zpcore/2/base -> origin/gh/zpcore/2/base 2025-09-07T07:34:58.4276365Z * [new branch] gh/zpcore/2/head -> origin/gh/zpcore/2/head 
2025-09-07T07:34:58.4276499Z * [new branch] gh/zpcore/3/base -> origin/gh/zpcore/3/base 2025-09-07T07:34:58.4276628Z * [new branch] gh/zpcore/3/head -> origin/gh/zpcore/3/head 2025-09-07T07:34:58.4276762Z * [new branch] gh/zpcore/4/base -> origin/gh/zpcore/4/base 2025-09-07T07:34:58.4276893Z * [new branch] gh/zpcore/4/head -> origin/gh/zpcore/4/head 2025-09-07T07:34:58.4277208Z * [new branch] gh/zpcore/5/base -> origin/gh/zpcore/5/base 2025-09-07T07:34:58.4280817Z * [new branch] gh/zpcore/5/head -> origin/gh/zpcore/5/head 2025-09-07T07:34:58.4281154Z * [new branch] gh/zpcore/6/base -> origin/gh/zpcore/6/base 2025-09-07T07:34:58.4281456Z * [new branch] gh/zpcore/6/head -> origin/gh/zpcore/6/head 2025-09-07T07:34:58.4281606Z * [new branch] gh/zpcore/7/base -> origin/gh/zpcore/7/base 2025-09-07T07:34:58.4281741Z * [new branch] gh/zpcore/7/head -> origin/gh/zpcore/7/head 2025-09-07T07:34:58.4282140Z * [new branch] gh/zpcore/8/base -> origin/gh/zpcore/8/base 2025-09-07T07:34:58.4282416Z * [new branch] gh/zpcore/8/head -> origin/gh/zpcore/8/head 2025-09-07T07:34:58.4283771Z * [new branch] google-main -> origin/google-main 2025-09-07T07:34:58.4284248Z * [new branch] guangyey/external_stream -> origin/guangyey/external_stream 2025-09-07T07:34:58.4284812Z * [new branch] guangyey/host_alloc -> origin/guangyey/host_alloc 2025-09-07T07:34:58.4285304Z * [new branch] guangyey/reimport -> origin/guangyey/reimport 2025-09-07T07:34:58.4286320Z * [new branch] guangyey/test_2025 -> origin/guangyey/test_2025 2025-09-07T07:34:58.4287776Z * [new branch] guilhermeleobas/cherry-pick-55d87d9dfd9 -> origin/guilhermeleobas/cherry-pick-55d87d9dfd9 2025-09-07T07:34:58.4288046Z * [new branch] haozhe/bf16-dynamic-shape -> origin/haozhe/bf16-dynamic-shape 2025-09-07T07:34:58.4291040Z * [new branch] hc_baseline -> origin/hc_baseline 2025-09-07T07:34:58.4291366Z * [new branch] hf_update -> origin/hf_update 2025-09-07T07:34:58.4291550Z * [new branch] hhh_decomp_mul -> origin/hhh_decomp_mul 2025-09-07T07:34:58.4291680Z * [new branch] hhh_rand -> origin/hhh_rand 2025-09-07T07:34:58.4295925Z * [new branch] hoy/mmsplitk -> origin/hoy/mmsplitk 2025-09-07T07:34:58.4296280Z * [new branch] hoy/triton-PR3973 -> origin/hoy/triton-PR3973 2025-09-07T07:34:58.4296574Z * [new branch] hoy/triton-coalescing-baseline -> origin/hoy/triton-coalescing-baseline 2025-09-07T07:34:58.4296860Z * [new branch] hoy/triton-coalescing-new -> origin/hoy/triton-coalescing-new 2025-09-07T07:34:58.4297040Z * [new branch] hoy/triton-coalescing-vec -> origin/hoy/triton-coalescing-vec 2025-09-07T07:34:58.4297286Z * [new branch] inductordecompfix -> origin/inductordecompfix 2025-09-07T07:34:58.4297912Z * [new branch] inline -> origin/inline 2025-09-07T07:34:58.4298068Z * [new branch] inlining -> origin/inlining 2025-09-07T07:34:58.4298228Z * [new branch] inlining-ezyang -> origin/inlining-ezyang 2025-09-07T07:34:58.4298791Z * [new branch] install-torchao-0.13.0 -> origin/install-torchao-0.13.0 2025-09-07T07:34:58.4299364Z * [new branch] int8_sdpa -> origin/int8_sdpa 2025-09-07T07:34:58.4302755Z * [new branch] invoke-subgraph -> origin/invoke-subgraph 2025-09-07T07:34:58.4303216Z * [new branch] issue#58739 -> origin/issue#58739 2025-09-07T07:34:58.4303600Z * [new branch] jcaip/test-cusparselt-version-0.6.2 -> origin/jcaip/test-cusparselt-version-0.6.2 2025-09-07T07:34:58.4303939Z * [new branch] jcaip/update-cusparselt-0.6.2 -> origin/jcaip/update-cusparselt-0.6.2 2025-09-07T07:34:58.4304675Z * [new branch] jeanschmidt/disable_rocm_build_tests -> 
origin/jeanschmidt/disable_rocm_build_tests 2025-09-07T07:34:58.4304890Z * [new branch] jithunnair-amd-patch-1 -> origin/jithunnair-amd-patch-1 2025-09-07T07:34:58.4305063Z * [new branch] jithunnair-amd-patch-2 -> origin/jithunnair-amd-patch-2 2025-09-07T07:34:58.4305685Z * [new branch] justinchu/attention-tests -> origin/justinchu/attention-tests 2025-09-07T07:34:58.4308100Z * [new branch] justinchu/native-qdq -> origin/justinchu/native-qdq 2025-09-07T07:34:58.4308380Z * [new branch] justinchu/ort-122 -> origin/justinchu/ort-122 2025-09-07T07:34:58.4308700Z * [new branch] justinchuby/dynamo-true -> origin/justinchuby/dynamo-true 2025-09-07T07:34:58.4309005Z * [new branch] kainan666/xlf_debug -> origin/kainan666/xlf_debug 2025-09-07T07:34:58.4309377Z * [new branch] kainan_test -> origin/kainan_test 2025-09-07T07:34:58.4310289Z * [new branch] learnablebias -> origin/learnablebias 2025-09-07T07:34:58.4313700Z * [new branch] leslie/test_group_gemm_epilogues -> origin/leslie/test_group_gemm_epilogues 2025-09-07T07:34:58.4314086Z * [new branch] lessw2020/fix_cutlass_cache_error -> origin/lessw2020/fix_cutlass_cache_error 2025-09-07T07:34:58.4314382Z * [new branch] liaoxuan/shm_all_reduce -> origin/liaoxuan/shm_all_reduce 2025-09-07T07:34:58.4314608Z * [new branch] liaoxuan/test_fa_disable_softmax -> origin/liaoxuan/test_fa_disable_softmax 2025-09-07T07:34:58.4314855Z * [new branch] liaoxuan/test_int8_sdpa -> origin/liaoxuan/test_int8_sdpa 2025-09-07T07:34:58.4315021Z * [new branch] lintbuilddocker -> origin/lintbuilddocker 2025-09-07T07:34:58.4315250Z * [new branch] llama4-stable -> origin/llama4-stable 2025-09-07T07:34:58.4316224Z * [new branch] logdetfix -> origin/logdetfix 2025-09-07T07:34:58.4318606Z * [new branch] lts/release/1.8 -> origin/lts/release/1.8 2025-09-07T07:34:58.4318794Z * [new branch] lucaskabela/#94773 -> origin/lucaskabela/#94773 2025-09-07T07:34:58.4318964Z * [new branch] lucaskabela/flop_counter -> origin/lucaskabela/flop_counter 2025-09-07T07:34:58.4319224Z * [new branch] lucaskabela/func_under_decomp -> origin/lucaskabela/func_under_decomp 2025-09-07T07:34:58.4320141Z * [new branch] lucaskabela/functional_in_dynamo -> origin/lucaskabela/functional_in_dynamo 2025-09-07T07:34:58.4320647Z * [new branch] lucaskabela/install_params_as_graph_attr -> origin/lucaskabela/install_params_as_graph_attr 2025-09-07T07:34:58.4321279Z * [new branch] lucaskabela/issue_120648 -> origin/lucaskabela/issue_120648 2025-09-07T07:34:58.4322411Z * [new branch] lucaskabela/misc_typing_dynamo -> origin/lucaskabela/misc_typing_dynamo 2025-09-07T07:34:58.4323038Z * [new branch] lucaskabela/parameters_as_graph_attr -> origin/lucaskabela/parameters_as_graph_attr 2025-09-07T07:34:58.4324023Z * [new branch] lucaskabela/remove_aot_dispatcher_metadata -> origin/lucaskabela/remove_aot_dispatcher_metadata 2025-09-07T07:34:58.4324428Z * [new branch] lucaskabela/rnn_decomp -> origin/lucaskabela/rnn_decomp 2025-09-07T07:34:58.4325437Z * [new branch] lucaskabela/typing_backends -> origin/lucaskabela/typing_backends 2025-09-07T07:34:58.4325858Z * [new branch] lucaskabela/typing_symbolic_convert -> origin/lucaskabela/typing_symbolic_convert 2025-09-07T07:34:58.4326914Z * [new branch] lucaskabela/typing_utils_improvements -> origin/lucaskabela/typing_utils_improvements 2025-09-07T07:34:58.4327591Z * [new branch] main -> origin/main 2025-09-07T07:34:58.4329011Z * [new branch] main-enable-b200-distributed-tests -> origin/main-enable-b200-distributed-tests 2025-09-07T07:34:58.4329184Z * [new branch] malfet-patch-1 
-> origin/malfet-patch-1 2025-09-07T07:34:58.4329816Z * [new branch] malfet-patch-12 -> origin/malfet-patch-12 2025-09-07T07:34:58.4330839Z * [new branch] malfet-patch-14 -> origin/malfet-patch-14 2025-09-07T07:34:58.4331389Z * [new branch] malfet-patch-6 -> origin/malfet-patch-6 2025-09-07T07:34:58.4332420Z * [new branch] malfet-patch-8 -> origin/malfet-patch-8 2025-09-07T07:34:58.4333864Z * [new branch] malfet/be-move-more-settings-to-checkout-pytorch -> origin/malfet/be-move-more-settings-to-checkout-pytorch 2025-09-07T07:34:58.4334145Z * [new branch] malfet/delete-upsteam-cuda -> origin/malfet/delete-upsteam-cuda 2025-09-07T07:34:58.4334643Z * [new branch] malfet/mps-implement-col2im -> origin/malfet/mps-implement-col2im 2025-09-07T07:34:58.4336405Z * [new branch] manuel/test-ops-common-allow-mps -> origin/manuel/test-ops-common-allow-mps 2025-09-07T07:34:58.4336613Z * [new branch] metascroy-patch-1 -> origin/metascroy-patch-1 2025-09-07T07:34:58.4337353Z * [new branch] mlazos/S429861-debug -> origin/mlazos/S429861-debug 2025-09-07T07:34:58.4337940Z * [new branch] mlazos/aa -> origin/mlazos/aa 2025-09-07T07:34:58.4338535Z * [new branch] mlazos/arg-renames -> origin/mlazos/arg-renames 2025-09-07T07:34:58.4339236Z * [new branch] mlazos/backup-test-branch -> origin/mlazos/backup-test-branch 2025-09-07T07:34:58.4339753Z * [new branch] mlazos/bad-cudagraphs -> origin/mlazos/bad-cudagraphs 2025-09-07T07:34:58.4340594Z * [new branch] mlazos/baseline -> origin/mlazos/baseline 2025-09-07T07:34:58.4341003Z * [new branch] mlazos/baseline-graph-breaks -> origin/mlazos/baseline-graph-breaks 2025-09-07T07:34:58.4342080Z * [new branch] mlazos/beta-tensor -> origin/mlazos/beta-tensor 2025-09-07T07:34:58.4342662Z * [new branch] mlazos/better-msg -> origin/mlazos/better-msg 2025-09-07T07:34:58.4343978Z * [new branch] mlazos/buffers -> origin/mlazos/buffers 2025-09-07T07:34:58.4344136Z * [new branch] mlazos/buffers2 -> origin/mlazos/buffers2 2025-09-07T07:34:58.4344663Z * [new branch] mlazos/buffers3 -> origin/mlazos/buffers3 2025-09-07T07:34:58.4348795Z * [new branch] mlazos/ck2 -> origin/mlazos/ck2 2025-09-07T07:34:58.4348983Z * [new branch] mlazos/combokernels -> origin/mlazos/combokernels 2025-09-07T07:34:58.4349163Z * [new branch] mlazos/ctx-cleanup -> origin/mlazos/ctx-cleanup 2025-09-07T07:34:58.4349311Z * [new branch] mlazos/cuda-cmd-log -> origin/mlazos/cuda-cmd-log 2025-09-07T07:34:58.4349478Z * [new branch] mlazos/cudagraph-tests -> origin/mlazos/cudagraph-tests 2025-09-07T07:34:58.4349839Z * [new branch] mlazos/cudagraphs-measurement -> origin/mlazos/cudagraphs-measurement 2025-09-07T07:34:58.4350250Z * [new branch] mlazos/cutlass-test -> origin/mlazos/cutlass-test 2025-09-07T07:34:58.4350891Z * [new branch] mlazos/cutlass-topo-bug -> origin/mlazos/cutlass-topo-bug 2025-09-07T07:34:58.4352377Z * [new branch] mlazos/data-gather -> origin/mlazos/data-gather 2025-09-07T07:34:58.4352752Z * [new branch] mlazos/data-ptrs2 -> origin/mlazos/data-ptrs2 2025-09-07T07:34:58.4353028Z * [new branch] mlazos/data-ptrs3 -> origin/mlazos/data-ptrs3 2025-09-07T07:34:58.4353444Z * [new branch] mlazos/dataclass-proxy -> origin/mlazos/dataclass-proxy 2025-09-07T07:34:58.4355414Z * [new branch] mlazos/dc-attrs -> origin/mlazos/dc-attrs 2025-09-07T07:34:58.4355749Z * [new branch] mlazos/dc-helion -> origin/mlazos/dc-helion 2025-09-07T07:34:58.4355984Z * [new branch] mlazos/dict-fix -> origin/mlazos/dict-fix 2025-09-07T07:34:58.4356193Z * [new branch] mlazos/disable-closures -> origin/mlazos/disable-closures 
2025-09-07T07:34:58.4357928Z * [new branch] mlazos/disable-tf -> origin/mlazos/disable-tf 2025-09-07T07:34:58.4358109Z * [new branch] mlazos/dupe-fix -> origin/mlazos/dupe-fix 2025-09-07T07:34:58.4358365Z * [new branch] mlazos/dyn-batch -> origin/mlazos/dyn-batch 2025-09-07T07:34:58.4359516Z * [new branch] mlazos/evt -> origin/mlazos/evt 2025-09-07T07:34:58.4359680Z * [new branch] mlazos/exp_disable -> origin/mlazos/exp_disable 2025-09-07T07:34:58.4360552Z * [new branch] mlazos/extract-examples -> origin/mlazos/extract-examples 2025-09-07T07:34:58.4360812Z * [new branch] mlazos/foreach-op -> origin/mlazos/foreach-op 2025-09-07T07:34:58.4361839Z * [new branch] mlazos/fp8 -> origin/mlazos/fp8 2025-09-07T07:34:58.4362217Z * [new branch] mlazos/fp8-bias -> origin/mlazos/fp8-bias 2025-09-07T07:34:58.4363270Z * [new branch] mlazos/fp8-bias-fusion -> origin/mlazos/fp8-bias-fusion 2025-09-07T07:34:58.4363623Z * [new branch] mlazos/fp8-fixes -> origin/mlazos/fp8-fixes 2025-09-07T07:34:58.4364454Z * [new branch] mlazos/freezing -> origin/mlazos/freezing 2025-09-07T07:34:58.4364824Z * [new branch] mlazos/h-comp -> origin/mlazos/h-comp 2025-09-07T07:34:58.4365854Z * [new branch] mlazos/h-comp2 -> origin/mlazos/h-comp2 2025-09-07T07:34:58.4366088Z * [new branch] mlazos/hash-hop -> origin/mlazos/hash-hop 2025-09-07T07:34:58.4367378Z * [new branch] mlazos/hc -> origin/mlazos/hc 2025-09-07T07:34:58.4367617Z * [new branch] mlazos/hc-cycles -> origin/mlazos/hc-cycles 2025-09-07T07:34:58.4372179Z * [new branch] mlazos/hc-fixes -> origin/mlazos/hc-fixes 2025-09-07T07:34:58.4372415Z * [new branch] mlazos/hc-fixes3 -> origin/mlazos/hc-fixes3 2025-09-07T07:34:58.4372564Z * [new branch] mlazos/hc-fixes4 -> origin/mlazos/hc-fixes4 2025-09-07T07:34:58.4372702Z * [new branch] mlazos/hc-hf -> origin/mlazos/hc-hf 2025-09-07T07:34:58.4372827Z * [new branch] mlazos/hc-mut -> origin/mlazos/hc-mut 2025-09-07T07:34:58.4372985Z * [new branch] mlazos/hc10 -> origin/mlazos/hc10 2025-09-07T07:34:58.4373105Z * [new branch] mlazos/hc11 -> origin/mlazos/hc11 2025-09-07T07:34:58.4373459Z * [new branch] mlazos/hc12 -> origin/mlazos/hc12 2025-09-07T07:34:58.4373708Z * [new branch] mlazos/hc13 -> origin/mlazos/hc13 2025-09-07T07:34:58.4374603Z * [new branch] mlazos/hc14 -> origin/mlazos/hc14 2025-09-07T07:34:58.4374924Z * [new branch] mlazos/hc15 -> origin/mlazos/hc15 2025-09-07T07:34:58.4378193Z * [new branch] mlazos/hc2 -> origin/mlazos/hc2 2025-09-07T07:34:58.4378343Z * [new branch] mlazos/hc4 -> origin/mlazos/hc4 2025-09-07T07:34:58.4378458Z * [new branch] mlazos/hc5 -> origin/mlazos/hc5 2025-09-07T07:34:58.4378599Z * [new branch] mlazos/hc6 -> origin/mlazos/hc6 2025-09-07T07:34:58.4378713Z * [new branch] mlazos/hc7 -> origin/mlazos/hc7 2025-09-07T07:34:58.4378869Z * [new branch] mlazos/hc8 -> origin/mlazos/hc8 2025-09-07T07:34:58.4380449Z * [new branch] mlazos/hc9 -> origin/mlazos/hc9 2025-09-07T07:34:58.4380618Z * [new branch] mlazos/hc_baseline2 -> origin/mlazos/hc_baseline2 2025-09-07T07:34:58.4380867Z * [new branch] mlazos/init-per-param -> origin/mlazos/init-per-param 2025-09-07T07:34:58.4383046Z * [new branch] mlazos/init_per_param -> origin/mlazos/init_per_param 2025-09-07T07:34:58.4383239Z * [new branch] mlazos/less-guards -> origin/mlazos/less-guards 2025-09-07T07:34:58.4383405Z * [new branch] mlazos/lr-composibility -> origin/mlazos/lr-composibility 2025-09-07T07:34:58.4383734Z * [new branch] mlazos/main -> origin/mlazos/main 2025-09-07T07:34:58.4388959Z * [new branch] mlazos/main-test-enablement -> 
origin/mlazos/main-test-enablement 2025-09-07T07:34:58.4389135Z * [new branch] mlazos/main2 -> origin/mlazos/main2 2025-09-07T07:34:58.4389318Z * [new branch] mlazos/mark-static-update -> origin/mlazos/mark-static-update 2025-09-07T07:34:58.4389447Z * [new branch] mlazos/mcg -> origin/mlazos/mcg 2025-09-07T07:34:58.4389617Z * [new branch] mlazos/mcg2 -> origin/mlazos/mcg2 2025-09-07T07:34:58.4389765Z * [new branch] mlazos/meta-guards -> origin/mlazos/meta-guards 2025-09-07T07:34:58.4389912Z * [new branch] mlazos/mlazos/ck2 -> origin/mlazos/mlazos/ck2 2025-09-07T07:34:58.4395189Z * [new branch] mlazos/mlazos/foreach-map-adam -> origin/mlazos/mlazos/foreach-map-adam 2025-09-07T07:34:58.4395576Z * [new branch] mlazos/mlazos/tf-mode-backup -> origin/mlazos/mlazos/tf-mode-backup 2025-09-07T07:34:58.4395879Z * [new branch] mlazos/mod-fix -> origin/mlazos/mod-fix 2025-09-07T07:34:58.4396056Z * [new branch] mlazos/mode-fix -> origin/mlazos/mode-fix 2025-09-07T07:34:58.4396206Z * [new branch] mlazos/more-tests -> origin/mlazos/more-tests 2025-09-07T07:34:58.4396359Z * [new branch] mlazos/no-cpp -> origin/mlazos/no-cpp 2025-09-07T07:34:58.4396722Z * [new branch] mlazos/no-init-group-handling -> origin/mlazos/no-init-group-handling 2025-09-07T07:34:58.4396871Z * [new branch] mlazos/offsets -> origin/mlazos/offsets 2025-09-07T07:34:58.4397033Z * [new branch] mlazos/opt-bench-exp2 -> origin/mlazos/opt-bench-exp2 2025-09-07T07:34:58.4397182Z * [new branch] mlazos/opt-incr -> origin/mlazos/opt-incr 2025-09-07T07:34:58.4397338Z * [new branch] mlazos/proxy-ctors -> origin/mlazos/proxy-ctors 2025-09-07T07:34:58.4397486Z * [new branch] mlazos/quant-fix -> origin/mlazos/quant-fix 2025-09-07T07:34:58.4397641Z * [new branch] mlazos/resnet-fix -> origin/mlazos/resnet-fix 2025-09-07T07:34:58.4400755Z * [new branch] mlazos/revert-inline -> origin/mlazos/revert-inline 2025-09-07T07:34:58.4401139Z * [new branch] mlazos/rm-buf-names -> origin/mlazos/rm-buf-names 2025-09-07T07:34:58.4401956Z * [new branch] mlazos/rm-code -> origin/mlazos/rm-code 2025-09-07T07:34:58.4402145Z * [new branch] mlazos/rm-spam -> origin/mlazos/rm-spam 2025-09-07T07:34:58.4402272Z * [new branch] mlazos/rtp -> origin/mlazos/rtp 2025-09-07T07:34:58.4402441Z * [new branch] mlazos/static-idx-dbg -> origin/mlazos/static-idx-dbg 2025-09-07T07:34:58.4402622Z * [new branch] mlazos/static-inputs-log -> origin/mlazos/static-inputs-log 2025-09-07T07:34:58.4402791Z * [new branch] mlazos/sub-param-fix -> origin/mlazos/sub-param-fix 2025-09-07T07:34:58.4402936Z * [new branch] mlazos/td-fix2 -> origin/mlazos/td-fix2 2025-09-07T07:34:58.4403100Z * [new branch] mlazos/tensor-hasattr2 -> origin/mlazos/tensor-hasattr2 2025-09-07T07:34:58.4403392Z * [new branch] mlazos/test -> origin/mlazos/test 2025-09-07T07:34:58.4403569Z * [new branch] mlazos/tf-mode -> origin/mlazos/tf-mode 2025-09-07T07:34:58.4405330Z * [new branch] mlazos/tf-mode-backup2 -> origin/mlazos/tf-mode-backup2 2025-09-07T07:34:58.4405526Z * [new branch] mlazos/tf-mode-reland -> origin/mlazos/tf-mode-reland 2025-09-07T07:34:58.4405709Z * [new branch] mlazos/tf-mode-reland2 -> origin/mlazos/tf-mode-reland2 2025-09-07T07:34:58.4406458Z * [new branch] mlazos/tf-mode-reland3 -> origin/mlazos/tf-mode-reland3 2025-09-07T07:34:58.4406999Z * [new branch] mlazos/topo-fix -> origin/mlazos/topo-fix 2025-09-07T07:34:58.4407850Z * [new branch] mlazos/triton-no-epi -> origin/mlazos/triton-no-epi 2025-09-07T07:34:58.4408338Z * [new branch] mlazos/tune-proto -> origin/mlazos/tune-proto 
2025-09-07T07:34:58.4411651Z * [new branch] mlazos/tuple-fixes -> origin/mlazos/tuple-fixes 2025-09-07T07:34:58.4412030Z * [new branch] mlazos/tuple-fixes2 -> origin/mlazos/tuple-fixes2 2025-09-07T07:34:58.4412293Z * [new branch] mlazos/tuple-handling -> origin/mlazos/tuple-handling 2025-09-07T07:34:58.4412470Z * [new branch] mlazos/user-streams -> origin/mlazos/user-streams 2025-09-07T07:34:58.4412612Z * [new branch] mlazos/vary-beta -> origin/mlazos/vary-beta 2025-09-07T07:34:58.4412895Z * [new branch] mlazos/vary-beta2 -> origin/mlazos/vary-beta2 2025-09-07T07:34:58.4413528Z * [new branch] mlazos/weird-perf1 -> origin/mlazos/weird-perf1 2025-09-07T07:34:58.4413756Z * [new branch] mm_out_dtype_compile -> origin/mm_out_dtype_compile 2025-09-07T07:34:58.4415626Z * [new branch] modify-setupvllm -> origin/modify-setupvllm 2025-09-07T07:34:58.4415950Z * [new branch] module-shim -> origin/module-shim 2025-09-07T07:34:58.4416221Z * [new branch] move-theme-out-docker -> origin/move-theme-out-docker 2025-09-07T07:34:58.4418721Z * [new branch] msaroufim/be1 -> origin/msaroufim/be1 2025-09-07T07:34:58.4419084Z * [new branch] msaroufim/cn_path -> origin/msaroufim/cn_path 2025-09-07T07:34:58.4419389Z * [new branch] msaroufim/dtensorfusedadam -> origin/msaroufim/dtensorfusedadam 2025-09-07T07:34:58.4419648Z * [new branch] msaroufim/reduce -> origin/msaroufim/reduce 2025-09-07T07:34:58.4419874Z * [new branch] mtia/basic-cmake -> origin/mtia/basic-cmake 2025-09-07T07:34:58.4423589Z * [new branch] muon_dev -> origin/muon_dev 2025-09-07T07:34:58.4423906Z * [new branch] muon_dev_1 -> origin/muon_dev_1 2025-09-07T07:34:58.4424103Z * [new branch] nativert_num_outputs -> origin/nativert_num_outputs 2025-09-07T07:34:58.4424526Z * [new branch] nativert_numoutputs -> origin/nativert_numoutputs 2025-09-07T07:34:58.4424705Z * [new branch] new-modifiy-setupvllm -> origin/new-modifiy-setupvllm 2025-09-07T07:34:58.4424845Z * [new branch] new-setupvllm -> origin/new-setupvllm 2025-09-07T07:34:58.4427183Z * [new branch] new_zeros_dtype -> origin/new_zeros_dtype 2025-09-07T07:34:58.4427549Z * [new branch] newtest-base -> origin/newtest-base 2025-09-07T07:34:58.4427799Z * [new branch] ngimel/cat_perf1 -> origin/ngimel/cat_perf1 2025-09-07T07:34:58.4427983Z * [new branch] ngimel/einsum_fix -> origin/ngimel/einsum_fix 2025-09-07T07:34:58.4428153Z * [new branch] ngimel/error_index_list -> origin/ngimel/error_index_list 2025-09-07T07:34:58.4430292Z * [new branch] ngimel/fabric_check -> origin/ngimel/fabric_check 2025-09-07T07:34:58.4430646Z * [new branch] ngimel/fabric_fix -> origin/ngimel/fabric_fix 2025-09-07T07:34:58.4430877Z * [new branch] ngimel/fix_driver_init_error -> origin/ngimel/fix_driver_init_error 2025-09-07T07:34:58.4431124Z * [new branch] ngimel/fix_nccl_segment_seg -> origin/ngimel/fix_nccl_segment_seg 2025-09-07T07:34:58.4431399Z * [new branch] ngimel/gg_new -> origin/ngimel/gg_new 2025-09-07T07:34:58.4435803Z * [new branch] ngimel/modeguard -> origin/ngimel/modeguard 2025-09-07T07:34:58.4436139Z * [new branch] ngimel/multicast_fix -> origin/ngimel/multicast_fix 2025-09-07T07:34:58.4436394Z * [new branch] ngimel/rocm_handle_type -> origin/ngimel/rocm_handle_type 2025-09-07T07:34:58.4436656Z * [new branch] ngimel/symm_handle_fabric -> origin/ngimel/symm_handle_fabric 2025-09-07T07:34:58.4436832Z * [new branch] ngimel/unbind_multimem -> origin/ngimel/unbind_multimem 2025-09-07T07:34:58.4436972Z * [new branch] nightly -> origin/nightly 2025-09-07T07:34:58.4437652Z * [new branch] nmacchioni-patch-10 -> 
origin/nmacchioni-patch-10 2025-09-07T07:34:58.4437998Z * [new branch] nmacchioni-patch-7 -> origin/nmacchioni-patch-7 2025-09-07T07:34:58.4438211Z * [new branch] nmacchioni-patch-8 -> origin/nmacchioni-patch-8 2025-09-07T07:34:58.4438580Z * [new branch] nmacchioni-patch-9 -> origin/nmacchioni-patch-9 2025-09-07T07:34:58.4439933Z * [new branch] nullplay/fuse_matmul -> origin/nullplay/fuse_matmul 2025-09-07T07:34:58.4440454Z * [new branch] nullplay_fuse_matmul -> origin/nullplay_fuse_matmul 2025-09-07T07:34:58.4441354Z * [new branch] one-off -> origin/one-off 2025-09-07T07:34:58.4443080Z * [new branch] orig/release/1.10 -> origin/orig/release/1.10 2025-09-07T07:34:58.4443235Z * [new branch] orig/release/1.11 -> origin/orig/release/1.11 2025-09-07T07:34:58.4445365Z * [new branch] orig/release/1.12 -> origin/orig/release/1.12 2025-09-07T07:34:58.4445562Z * [new branch] orig/release/1.13 -> origin/orig/release/1.13 2025-09-07T07:34:58.4445732Z * [new branch] orig/release/1.6 -> origin/orig/release/1.6 2025-09-07T07:34:58.4446879Z * [new branch] orig/release/1.7 -> origin/orig/release/1.7 2025-09-07T07:34:58.4452838Z * [new branch] orig/release/1.8 -> origin/orig/release/1.8 2025-09-07T07:34:58.4455456Z * [new branch] orig/release/1.9 -> origin/orig/release/1.9 2025-09-07T07:34:58.4455606Z * [new branch] orig/release/2.0 -> origin/orig/release/2.0 2025-09-07T07:34:58.4455755Z * [new branch] orig/release/2.1 -> origin/orig/release/2.1 2025-09-07T07:34:58.4456105Z * [new branch] orig/release/2.2 -> origin/orig/release/2.2 2025-09-07T07:34:58.4456253Z * [new branch] orig/release/2.3 -> origin/orig/release/2.3 2025-09-07T07:34:58.4456401Z * [new branch] orig/release/2.4 -> origin/orig/release/2.4 2025-09-07T07:34:58.4456546Z * [new branch] orig/release/2.5 -> origin/orig/release/2.5 2025-09-07T07:34:58.4456688Z * [new branch] orig/release/2.6 -> origin/orig/release/2.6 2025-09-07T07:34:58.4456824Z * [new branch] orig/release/2.7 -> origin/orig/release/2.7 2025-09-07T07:34:58.4456998Z * [new branch] orig/release/2.8 -> origin/orig/release/2.8 2025-09-07T07:34:58.4457141Z * [new branch] oulgen/fx_graph -> origin/oulgen/fx_graph 2025-09-07T07:34:58.4457296Z * [new branch] padded-tensor -> origin/padded-tensor 2025-09-07T07:34:58.4457424Z * [new branch] pca2 -> origin/pca2 2025-09-07T07:34:58.4457874Z * [new branch] pianpwk-patch-1 -> origin/pianpwk-patch-1 2025-09-07T07:34:58.4462423Z * [new branch] pianpwk/backed_size_oblivious_export -> origin/pianpwk/backed_size_oblivious_export 2025-09-07T07:34:58.4462696Z * [new branch] pianpwk/invalidate_fake_memo -> origin/pianpwk/invalidate_fake_memo 2025-09-07T07:34:58.4462866Z * [new branch] pianpwk/max_1_strides -> origin/pianpwk/max_1_strides 2025-09-07T07:34:58.4463017Z * [new branch] pianpwk/maybe_guard_rel -> origin/pianpwk/maybe_guard_rel 2025-09-07T07:34:58.4463170Z * [new branch] pianpwk/nonzero_memo -> origin/pianpwk/nonzero_memo 2025-09-07T07:34:58.4463377Z * [new branch] pianpwk/oblivious_reshape_view_better -> origin/pianpwk/oblivious_reshape_view_better 2025-09-07T07:34:58.4463559Z * [new branch] pianpwk/oblivious_slice_forward -> origin/pianpwk/oblivious_slice_forward 2025-09-07T07:34:58.4468137Z * [new branch] pianpwk/oblivious_where -> origin/pianpwk/oblivious_where 2025-09-07T07:34:58.4468721Z * [new branch] pianpwk/param_static_pgo -> origin/pianpwk/param_static_pgo 2025-09-07T07:34:58.4468949Z * [new branch] pianpwk/pre_forward_hook -> origin/pianpwk/pre_forward_hook 2025-09-07T07:34:58.4469161Z * [new branch] 
pianpwk/remove_guard_fail_break -> origin/pianpwk/remove_guard_fail_break 2025-09-07T07:34:58.4469341Z * [new branch] pianpwk/slice_fresh_symbols -> origin/pianpwk/slice_fresh_symbols 2025-09-07T07:34:58.4469518Z * [new branch] pianpwk/sym_tokens_draft -> origin/pianpwk/sym_tokens_draft 2025-09-07T07:34:58.4469745Z * [new branch] pianpwk/test_pointwise_guard_or_false -> origin/pianpwk/test_pointwise_guard_or_false 2025-09-07T07:34:58.4469964Z * [new branch] pianpwk/test_slice_fake_impl -> origin/pianpwk/test_slice_fake_impl 2025-09-07T07:34:58.4470154Z * [new branch] pianpwk/totally_draft_sym_wrap -> origin/pianpwk/totally_draft_sym_wrap 2025-09-07T07:34:58.4470351Z * [new branch] pianpwk/unbacked_channels_last -> origin/pianpwk/unbacked_channels_last 2025-09-07T07:34:58.4470537Z * [new branch] pianpwk/unbacked_safe_conv1d -> origin/pianpwk/unbacked_safe_conv1d 2025-09-07T07:34:58.4470723Z * [new branch] pianpwk/unbacked_sdpa_flash -> origin/pianpwk/unbacked_sdpa_flash 2025-09-07T07:34:58.4470940Z * [new branch] pianpwk/unbacked_should_swap -> origin/pianpwk/unbacked_should_swap 2025-09-07T07:34:58.4472425Z * [new branch] pianpwk/unbacked_should_swap_2 -> origin/pianpwk/unbacked_should_swap_2 2025-09-07T07:34:58.4472670Z * [new branch] pianpwk/unbacked_slice_binding -> origin/pianpwk/unbacked_slice_binding 2025-09-07T07:34:58.4473583Z * [new branch] pianpwk/unbacked_slice_forward -> origin/pianpwk/unbacked_slice_forward 2025-09-07T07:34:58.4473772Z * [new branch] pianpwk/user_symints -> origin/pianpwk/user_symints 2025-09-07T07:34:58.4474287Z * [new branch] pianpwk/wan21_reshape -> origin/pianpwk/wan21_reshape 2025-09-07T07:34:58.4475308Z * [new branch] pianpwk/whitelist_optimizer -> origin/pianpwk/whitelist_optimizer 2025-09-07T07:34:58.4475785Z * [new branch] pin-torchao -> origin/pin-torchao 2025-09-07T07:34:58.4477005Z * [new branch] piz/fall_back_missing_0716 -> origin/piz/fall_back_missing_0716 2025-09-07T07:34:58.4477292Z * [new branch] piz/improve_scatter_0808 -> origin/piz/improve_scatter_0808 2025-09-07T07:34:58.4478376Z * [new branch] pool-separate -> origin/pool-separate 2025-09-07T07:34:58.4478789Z * [new branch] pr-156087 -> origin/pr-156087 2025-09-07T07:34:58.4480104Z * [new branch] pr/131860 -> origin/pr/131860 2025-09-07T07:34:58.4480580Z * [new branch] predispatch_to -> origin/predispatch_to 2025-09-07T07:34:58.4481596Z * [new branch] pt-opt-cuda3 -> origin/pt-opt-cuda3 2025-09-07T07:34:58.4482266Z * [new branch] pyobjectslot -> origin/pyobjectslot 2025-09-07T07:34:58.4483522Z * [new branch] python_compiled_autograd -> origin/python_compiled_autograd 2025-09-07T07:34:58.4484805Z * [new branch] qchip/export-D54134695 -> origin/qchip/export-D54134695 2025-09-07T07:34:58.4485205Z * [new branch] quint-bits -> origin/quint-bits 2025-09-07T07:34:58.4487245Z * [new branch] release/1.10 -> origin/release/1.10 2025-09-07T07:34:58.4487526Z * [new branch] release/1.11 -> origin/release/1.11 2025-09-07T07:34:58.4487916Z * [new branch] release/1.12 -> origin/release/1.12 2025-09-07T07:34:58.4492053Z * [new branch] release/1.13 -> origin/release/1.13 2025-09-07T07:34:58.4492472Z * [new branch] release/1.4 -> origin/release/1.4 2025-09-07T07:34:58.4492612Z * [new branch] release/1.4.1 -> origin/release/1.4.1 2025-09-07T07:34:58.4492767Z * [new branch] release/1.5 -> origin/release/1.5 2025-09-07T07:34:58.4492890Z * [new branch] release/1.6 -> origin/release/1.6 2025-09-07T07:34:58.4493022Z * [new branch] release/1.7 -> origin/release/1.7 2025-09-07T07:34:58.4493166Z * [new branch] 
release/1.8 -> origin/release/1.8 2025-09-07T07:34:58.4494545Z * [new branch] release/1.9 -> origin/release/1.9 2025-09-07T07:34:58.4494891Z * [new branch] release/2.0 -> origin/release/2.0 2025-09-07T07:34:58.4495348Z * [new branch] release/2.1 -> origin/release/2.1 2025-09-07T07:34:58.4497699Z * [new branch] release/2.2 -> origin/release/2.2 2025-09-07T07:34:58.4498034Z * [new branch] release/2.3 -> origin/release/2.3 2025-09-07T07:34:58.4498202Z * [new branch] release/2.4 -> origin/release/2.4 2025-09-07T07:34:58.4499635Z * [new branch] release/2.5 -> origin/release/2.5 2025-09-07T07:34:58.4499964Z * [new branch] release/2.6 -> origin/release/2.6 2025-09-07T07:34:58.4500446Z * [new branch] release/2.7 -> origin/release/2.7 2025-09-07T07:34:58.4502377Z * [new branch] release/2.8 -> origin/release/2.8 2025-09-07T07:34:58.4502691Z * [new branch] release_notes -> origin/release_notes 2025-09-07T07:34:58.4503151Z * [new branch] remove-actionable-label -> origin/remove-actionable-label 2025-09-07T07:34:58.4503313Z * [new branch] remove-ao -> origin/remove-ao 2025-09-07T07:34:58.4508970Z * [new branch] removedeprecatedvllmtest -> origin/removedeprecatedvllmtest 2025-09-07T07:34:58.4509393Z * [new branch] replace-pytorch-labs-20250812-195836 -> origin/replace-pytorch-labs-20250812-195836 2025-09-07T07:34:58.4509745Z * [new branch] replace-pytorch-labs-20250812-200248 -> origin/replace-pytorch-labs-20250812-200248 2025-09-07T07:34:58.4509971Z * [new branch] replace-pytorch-labs-20250812-200324 -> origin/replace-pytorch-labs-20250812-200324 2025-09-07T07:34:58.4510184Z * [new branch] replace-pytorch-labs-20250812-204020 -> origin/replace-pytorch-labs-20250812-204020 2025-09-07T07:34:58.4510502Z * [new branch] replace-pytorch-labs-20250812-204125 -> origin/replace-pytorch-labs-20250812-204125 2025-09-07T07:34:58.4510719Z * [new branch] replace-pytorch-labs-20250812-205624 -> origin/replace-pytorch-labs-20250812-205624 2025-09-07T07:34:58.4510949Z * [new branch] revert-131069-gh/krzysztofjordan/1/head -> origin/revert-131069-gh/krzysztofjordan/1/head 2025-09-07T07:34:58.4511403Z * [new branch] revert-131469-gh/andrewor14/51/head -> origin/revert-131469-gh/andrewor14/51/head 2025-09-07T07:34:58.4519494Z * [new branch] revert-156870-gh/skarjala/3/head -> origin/revert-156870-gh/skarjala/3/head 2025-09-07T07:34:58.4519858Z * [new branch] revert-157914-cherry-pick-157503-by-pytorch_bot_bot_ -> origin/revert-157914-cherry-pick-157503-by-pytorch_bot_bot_ 2025-09-07T07:34:58.4520018Z * [new branch] rocm-monitoring -> origin/rocm-monitoring 2025-09-07T07:34:58.4520171Z * [new branch] ruisi/relax_memory -> origin/ruisi/relax_memory 2025-09-07T07:34:58.4520384Z * [new branch] run-torchbench-smoke-test-h100 -> origin/run-torchbench-smoke-test-h100 2025-09-07T07:34:58.4520643Z * [new branch] ryanguo99/cleanup-dynamo-expected-failures -> origin/ryanguo99/cleanup-dynamo-expected-failures 2025-09-07T07:34:58.4520804Z * [new branch] ryanguo99/fix-closure-var -> origin/ryanguo99/fix-closure-var 2025-09-07T07:34:58.4520961Z * [new branch] rzou/faketensor_bench -> origin/rzou/faketensor_bench 2025-09-07T07:34:58.4521081Z * [new branch] rzou/njt -> origin/rzou/njt 2025-09-07T07:34:58.4521386Z * [new branch] rzou/pca -> origin/rzou/pca 2025-09-07T07:34:58.4521540Z * [new branch] rzou/realprop -> origin/rzou/realprop 2025-09-07T07:34:58.4521767Z * [new branch] rzou/setup_context -> origin/rzou/setup_context 2025-09-07T07:34:58.4523068Z * [new branch] sanchitintel/refactor_aten_int8_woq_gemm -> 
origin/sanchitintel/refactor_aten_int8_woq_gemm 2025-09-07T07:34:58.4523416Z * [new branch] sanchitintel/weird_thing_with_test_cpu_select_algorithm -> origin/sanchitintel/weird_thing_with_test_cpu_select_algorithm 2025-09-07T07:34:58.4524141Z * [new branch] sapling-pr-archive-SS-JIA -> origin/sapling-pr-archive-SS-JIA 2025-09-07T07:34:58.4524742Z * [new branch] save -> origin/save 2025-09-07T07:34:58.4525993Z * [new branch] sdym/2.5.1 -> origin/sdym/2.5.1 2025-09-07T07:34:58.4526442Z * [new branch] seemethere-patch-1 -> origin/seemethere-patch-1 2025-09-07T07:34:58.4527573Z * [new branch] setupvllm -> origin/setupvllm 2025-09-07T07:34:58.4527883Z * [new branch] share_and_pin_fork -> origin/share_and_pin_fork 2025-09-07T07:34:58.4529562Z * [new branch] shengf/fx-xform-perf -> origin/shengf/fx-xform-perf 2025-09-07T07:34:58.4530015Z * [new branch] shikaili_fp8_allgather -> origin/shikaili_fp8_allgather 2025-09-07T07:34:58.4531078Z * [new branch] shoumikhin-patch-1 -> origin/shoumikhin-patch-1 2025-09-07T07:34:58.4531441Z * [new branch] shoumikhin-patch-12 -> origin/shoumikhin-patch-12 2025-09-07T07:34:58.4532592Z * [new branch] simplify-fq-per-channel -> origin/simplify-fq-per-channel 2025-09-07T07:34:58.4533037Z * [new branch] solve-accuracy-fix -> origin/solve-accuracy-fix 2025-09-07T07:34:58.4534168Z * [new branch] soulitzer/stash-tls-ac -> origin/soulitzer/stash-tls-ac 2025-09-07T07:34:58.4535145Z * [new branch] sqzhang/flight4 -> origin/sqzhang/flight4 2025-09-07T07:34:58.4535455Z * [new branch] sqzhang/flight4plus -> origin/sqzhang/flight4plus 2025-09-07T07:34:58.4536786Z * [new branch] sraikund/record_funct_test -> origin/sraikund/record_funct_test 2025-09-07T07:34:58.4537501Z * [new branch] sraikund16/test -> origin/sraikund16/test 2025-09-07T07:34:58.4538687Z * [new branch] stablize-compilation-time -> origin/stablize-compilation-time 2025-09-07T07:34:58.4538845Z * [new branch] standalone-templates -> origin/standalone-templates 2025-09-07T07:34:58.4539876Z * [new branch] standalone_package_weights -> origin/standalone_package_weights 2025-09-07T07:34:58.4540052Z * [new branch] starterTaskUpdate -> origin/starterTaskUpdate 2025-09-07T07:34:58.4543583Z * [new branch] subgraph_fuse -> origin/subgraph_fuse 2025-09-07T07:34:58.4543754Z * [new branch] support-uv-in-collect_env -> origin/support-uv-in-collect_env 2025-09-07T07:34:58.4543883Z * [new branch] sve-poc -> origin/sve-poc 2025-09-07T07:34:58.4544020Z * [new branch] svekars-patch-1 -> origin/svekars-patch-1 2025-09-07T07:34:58.4544150Z * [new branch] switch-bn -> origin/switch-bn 2025-09-07T07:34:58.4547245Z * [new branch] sympy-bottleneck-repro -> origin/sympy-bottleneck-repro 2025-09-07T07:34:58.4547452Z * [new branch] tenpercent/ck_rocm_ci_v3 -> origin/tenpercent/ck_rocm_ci_v3 2025-09-07T07:34:58.4547642Z * [new branch] tensordict_integration -> origin/tensordict_integration 2025-09-07T07:34:58.4547769Z * [new branch] test-7054 -> origin/test-7054 2025-09-07T07:34:58.4547936Z * [new branch] test-move-conda-builds -> origin/test-move-conda-builds 2025-09-07T07:34:58.4548170Z * [new branch] test-myst-markdown-docstring -> origin/test-myst-markdown-docstring 2025-09-07T07:34:58.4549012Z * [new branch] test-old -> origin/test-old 2025-09-07T07:34:58.4549531Z * [new branch] test-vec-migration-internally -> origin/test-vec-migration-internally 2025-09-07T07:34:58.4550657Z * [new branch] test/bmm_heur -> origin/test/bmm_heur 2025-09-07T07:34:58.4551109Z * [new branch] test/inductor -> origin/test/inductor 2025-09-07T07:34:58.4552591Z 
* [new branch] tianren/flex_paged_attn_fix -> origin/tianren/flex_paged_attn_fix 2025-09-07T07:34:58.4552796Z * [new branch] tianren/flex_paged_attn_fix_temp -> origin/tianren/flex_paged_attn_fix_temp 2025-09-07T07:34:58.4553762Z * [new branch] tianren/test -> origin/tianren/test 2025-09-07T07:34:58.4554075Z * [new branch] tidy_performance_cyy -> origin/tidy_performance_cyy 2025-09-07T07:34:58.4557215Z * [new branch] torchtitan_ep -> origin/torchtitan_ep 2025-09-07T07:34:58.4557409Z * [new branch] trace_fsdp_torchtune_lora -> origin/trace_fsdp_torchtune_lora 2025-09-07T07:34:58.4557820Z * [new branch] traceable_fsdp_unit_tests -> origin/traceable_fsdp_unit_tests 2025-09-07T07:34:58.4557973Z * [new branch] tree_loop_vec_base -> origin/tree_loop_vec_base 2025-09-07T07:34:58.4558104Z * [new branch] tree_vec_base -> origin/tree_vec_base 2025-09-07T07:34:58.4558638Z * [new branch] triton-update -> origin/triton-update 2025-09-07T07:34:58.4559243Z * [new branch] triton_kernel -> origin/triton_kernel 2025-09-07T07:34:58.4559852Z * [new branch] triton_kernel_perf -> origin/triton_kernel_perf 2025-09-07T07:34:58.4560647Z * [new branch] tt_pkg_1908 -> origin/tt_pkg_1908 2025-09-07T07:34:58.4561201Z * [new branch] tweak-transformer-dependabot -> origin/tweak-transformer-dependabot 2025-09-07T07:34:58.4562230Z * [new branch] type_dec -> origin/type_dec 2025-09-07T07:34:58.4562780Z * [new branch] udate-sphinx-dependancies -> origin/udate-sphinx-dependancies 2025-09-07T07:34:58.4564149Z * [new branch] update-audio-commit-hash/16818882925-1712-1 -> origin/update-audio-commit-hash/16818882925-1712-1 2025-09-07T07:34:58.4564412Z * [new branch] update-audio-commit-hash/16895560422-1720-1 -> origin/update-audio-commit-hash/16895560422-1720-1 2025-09-07T07:34:58.4565226Z * [new branch] update-audio-commit-hash/16924174496-1738-1 -> origin/update-audio-commit-hash/16924174496-1738-1 2025-09-07T07:34:58.4565855Z * [new branch] update-audio-commit-hash/17002010821-1749-1 -> origin/update-audio-commit-hash/17002010821-1749-1 2025-09-07T07:34:58.4566531Z * [new branch] update-audio-commit-hash/17056004427-1766-1 -> origin/update-audio-commit-hash/17056004427-1766-1 2025-09-07T07:34:58.4567758Z * [new branch] update-audio-commit-hash/17085054029-1767-1 -> origin/update-audio-commit-hash/17085054029-1767-1 2025-09-07T07:34:58.4568418Z * [new branch] update-audio-commit-hash/17142507405-1771-1 -> origin/update-audio-commit-hash/17142507405-1771-1 2025-09-07T07:34:58.4569840Z * [new branch] update-audio-commit-hash/17168762740-1773-1 -> origin/update-audio-commit-hash/17168762740-1773-1 2025-09-07T07:34:58.4570105Z * [new branch] update-audio-commit-hash/17311174639-1780-1 -> origin/update-audio-commit-hash/17311174639-1780-1 2025-09-07T07:34:58.4570541Z * [new branch] update-audio-commit-hash/17336898740-1781-1 -> origin/update-audio-commit-hash/17336898740-1781-1 2025-09-07T07:34:58.4571084Z * [new branch] update-audio-commit-hash/17389727684-1786-1 -> origin/update-audio-commit-hash/17389727684-1786-1 2025-09-07T07:34:58.4571906Z * [new branch] update-audio-commit-hash/17449538142-1790-1 -> origin/update-audio-commit-hash/17449538142-1790-1 2025-09-07T07:34:58.4572493Z * [new branch] update-audio-commit-hash/17507351808-1794-1 -> origin/update-audio-commit-hash/17507351808-1794-1 2025-09-07T07:34:58.4574352Z * [new branch] update-dynamic-shapes-doc -> origin/update-dynamic-shapes-doc 2025-09-07T07:34:58.4574827Z * [new branch] update-executorch-commit-hash/15694981040-1626-1 -> 
origin/update-executorch-commit-hash/15694981040-1626-1 2025-09-07T07:34:58.4575245Z * [new branch] update-triton-commit-hash/13663274526-1487-2 -> origin/update-triton-commit-hash/13663274526-1487-2 2025-09-07T07:34:58.4578078Z * [new branch] update-vision-commit-hash/15336342773-1607-1 -> origin/update-vision-commit-hash/15336342773-1607-1 2025-09-07T07:34:58.4578503Z * [new branch] update-vllm-commit-hash/16737365217-1704-1 -> origin/update-vllm-commit-hash/16737365217-1704-1 2025-09-07T07:34:58.4578805Z * [new branch] update-vllm-commit-hash/16843157111-1713-1 -> origin/update-vllm-commit-hash/16843157111-1713-1 2025-09-07T07:34:58.4579284Z * [new branch] update-vllm-commit-hash/16855312394-1714-1 -> origin/update-vllm-commit-hash/16855312394-1714-1 2025-09-07T07:34:58.4581860Z * [new branch] update-vllm-commit-hash/16924174496-1738-1 -> origin/update-vllm-commit-hash/16924174496-1738-1 2025-09-07T07:34:58.4582342Z * [new branch] update-vllm-commit-hash/16952608705-1745-1 -> origin/update-vllm-commit-hash/16952608705-1745-1 2025-09-07T07:34:58.4582642Z * [new branch] update-vllm-commit-hash/16979836546-1748-1 -> origin/update-vllm-commit-hash/16979836546-1748-1 2025-09-07T07:34:58.4582862Z * [new branch] update-vllm-commit-hash/17014576881-1756-1 -> origin/update-vllm-commit-hash/17014576881-1756-1 2025-09-07T07:34:58.4583082Z * [new branch] update-vllm-commit-hash/17027830869-1761-1 -> origin/update-vllm-commit-hash/17027830869-1761-1 2025-09-07T07:34:58.4583293Z * [new branch] update-vllm-commit-hash/17056004427-1766-1 -> origin/update-vllm-commit-hash/17056004427-1766-1 2025-09-07T07:34:58.4586440Z * [new branch] update-vllm-commit-hash/17085054029-1767-1 -> origin/update-vllm-commit-hash/17085054029-1767-1 2025-09-07T07:34:58.4586662Z * [new branch] update-vllm-commit-hash/17113610216-1768-1 -> origin/update-vllm-commit-hash/17113610216-1768-1 2025-09-07T07:34:58.4587016Z * [new branch] update-vllm-commit-hash/17142507405-1771-1 -> origin/update-vllm-commit-hash/17142507405-1771-1 2025-09-07T07:34:58.4587231Z * [new branch] update-vllm-commit-hash/17181878974-1774-1 -> origin/update-vllm-commit-hash/17181878974-1774-1 2025-09-07T07:34:58.4587439Z * [new branch] update-vllm-commit-hash/17311174639-1780-1 -> origin/update-vllm-commit-hash/17311174639-1780-1 2025-09-07T07:34:58.4587664Z * [new branch] update-vllm-commit-hash/17336898740-1781-1 -> origin/update-vllm-commit-hash/17336898740-1781-1 2025-09-07T07:34:58.4587878Z * [new branch] update-vllm-commit-hash/17364352302-1785-1 -> origin/update-vllm-commit-hash/17364352302-1785-1 2025-09-07T07:34:58.4591557Z * [new branch] update-vllm-commit-hash/17389727684-1786-1 -> origin/update-vllm-commit-hash/17389727684-1786-1 2025-09-07T07:34:58.4591783Z * [new branch] update-vllm-commit-hash/17449538142-1790-1 -> origin/update-vllm-commit-hash/17449538142-1790-1 2025-09-07T07:34:58.4592008Z * [new branch] update-vllm-commit-hash/17480069797-1791-1 -> origin/update-vllm-commit-hash/17480069797-1791-1 2025-09-07T07:34:58.4592219Z * [new branch] update-vllm-commit-hash/17507351808-1794-1 -> origin/update-vllm-commit-hash/17507351808-1794-1 2025-09-07T07:34:58.4592442Z * [new branch] update-xla-commit-hash/16873912760-198-1 -> origin/update-xla-commit-hash/16873912760-198-1 2025-09-07T07:34:58.4592651Z * [new branch] update-xla-commit-hash/17034266655-199-1 -> origin/update-xla-commit-hash/17034266655-199-1 2025-09-07T07:34:58.4592868Z * [new branch] update-xla-commit-hash/17202464405-200-1 -> origin/update-xla-commit-hash/17202464405-200-1 
2025-09-07T07:34:58.4595095Z * [new branch] update_docs_torch_multinomial_issue#125388 -> origin/update_docs_torch_multinomial_issue#125388 2025-09-07T07:34:58.4595634Z * [new branch] update_executorch_pin -> origin/update_executorch_pin 2025-09-07T07:34:58.4595888Z * [new branch] update_slow_tests_1722488736 -> origin/update_slow_tests_1722488736 2025-09-07T07:34:58.4596071Z * [new branch] update_slow_tests_1722879173 -> origin/update_slow_tests_1722879173 2025-09-07T07:34:58.4596249Z * [new branch] update_slow_tests_1752478971 -> origin/update_slow_tests_1752478971 2025-09-07T07:34:58.4596418Z * [new branch] update_slow_tests_1755502951 -> origin/update_slow_tests_1755502951 2025-09-07T07:34:58.4600044Z * [new branch] update_slow_tests_1756107664 -> origin/update_slow_tests_1756107664 2025-09-07T07:34:58.4600408Z * [new branch] update_submodule_FBGEMM -> origin/update_submodule_FBGEMM 2025-09-07T07:34:58.4600586Z * [new branch] update_submodule_kineto -> origin/update_submodule_kineto 2025-09-07T07:34:58.4600771Z * [new branch] update_submodule_tensorpipe -> origin/update_submodule_tensorpipe 2025-09-07T07:34:58.4600910Z * [new branch] v0.1.2 -> origin/v0.1.2 2025-09-07T07:34:58.4601043Z * [new branch] v1.0.1 -> origin/v1.0.1 2025-09-07T07:34:58.4604693Z * [new branch] v1.0.3 -> origin/v1.0.3 2025-09-07T07:34:58.4604854Z * [new branch] v1.1.0 -> origin/v1.1.0 2025-09-07T07:34:58.4604981Z * [new branch] v1.2.0 -> origin/v1.2.0 2025-09-07T07:34:58.4605101Z * [new branch] v1.3.0 -> origin/v1.3.0 2025-09-07T07:34:58.4605256Z * [new branch] v1.3.1 -> origin/v1.3.1 2025-09-07T07:34:58.4605726Z * [new branch] validate_fn -> origin/validate_fn 2025-09-07T07:34:58.4606908Z * [new branch] validations_2.6 -> origin/validations_2.6 2025-09-07T07:34:58.4613284Z * [new branch] validations_2.8 -> origin/validations_2.8 2025-09-07T07:34:58.4617474Z * [new branch] viable/strict -> origin/viable/strict 2025-09-07T07:34:58.4619514Z * [new branch] vllmbuildci -> origin/vllmbuildci 2025-09-07T07:34:58.4622523Z * [new branch] vllmpin -> origin/vllmpin 2025-09-07T07:34:58.4622841Z * [new branch] wdvr/conda_devcontainer -> origin/wdvr/conda_devcontainer 2025-09-07T07:34:58.4629052Z * [new branch] wdvr/iss_145259 -> origin/wdvr/iss_145259 2025-09-07T07:34:58.4630919Z * [new branch] weight_sharing_cpp -> origin/weight_sharing_cpp 2025-09-07T07:34:58.4631089Z * [new branch] whc/flight4 -> origin/whc/flight4 2025-09-07T07:34:58.4631229Z * [new branch] whc/flight51 -> origin/whc/flight51 2025-09-07T07:34:58.4631351Z * [new branch] whc/flight53 -> origin/whc/flight53 2025-09-07T07:34:58.4631474Z * [new branch] whc/stage2 -> origin/whc/stage2 2025-09-07T07:34:58.4631606Z * [new branch] whc/uneven -> origin/whc/uneven 2025-09-07T07:34:58.4631746Z * [new branch] whc/uneven-merge -> origin/whc/uneven-merge 2025-09-07T07:34:58.4631877Z * [new branch] win_warnings -> origin/win_warnings 2025-09-07T07:34:58.4632023Z * [new branch] windows_libtorch_free -> origin/windows_libtorch_free 2025-09-07T07:34:58.4632164Z * [new branch] workonoldcommit -> origin/workonoldcommit 2025-09-07T07:34:58.4632452Z * [new branch] wychi-autotune-prune-configs-by-shared-mem -> origin/wychi-autotune-prune-configs-by-shared-mem 2025-09-07T07:34:58.4632576Z * [new branch] xmfan/ca_0516 -> origin/xmfan/ca_0516 2025-09-07T07:34:58.4632719Z * [new branch] xmfan/ca_1051b93192 -> origin/xmfan/ca_1051b93192 2025-09-07T07:34:58.4632983Z * [new branch] xmfan/ca_1a722f62c248391fc4a542e8851a5559aa356ae8 -> 
origin/xmfan/ca_1a722f62c248391fc4a542e8851a5559aa356ae8 2025-09-07T07:34:58.4633125Z * [new branch] xmfan/ca_5a2be192d1 -> origin/xmfan/ca_5a2be192d1 2025-09-07T07:34:58.4633257Z * [new branch] xmfan/ca_9d59b516e9 -> origin/xmfan/ca_9d59b516e9 2025-09-07T07:34:58.4633379Z * [new branch] xmfan/ca_api -> origin/xmfan/ca_api 2025-09-07T07:34:58.4633506Z * [new branch] xmfan/ca_apr8 -> origin/xmfan/ca_apr8 2025-09-07T07:34:58.4633768Z * [new branch] xmfan/ca_base -> origin/xmfan/ca_base 2025-09-07T07:34:58.4633910Z * [new branch] xmfan/ca_cudagraphs -> origin/xmfan/ca_cudagraphs 2025-09-07T07:34:58.4634053Z * [new branch] xmfan/ca_dynamic -> origin/xmfan/ca_dynamic 2025-09-07T07:34:58.4634362Z * [new branch] xmfan/ca_fix_dyn -> origin/xmfan/ca_fix_dyn 2025-09-07T07:34:58.4634579Z * [new branch] xmfan/ca_fix_lowering -> origin/xmfan/ca_fix_lowering 2025-09-07T07:34:58.4634810Z * [new branch] xmfan/ca_fix_polyfills -> origin/xmfan/ca_fix_polyfills 2025-09-07T07:34:58.4635066Z * [new branch] xmfan/ca_jan3 -> origin/xmfan/ca_jan3 2025-09-07T07:34:58.4635213Z * [new branch] xmfan/ca_jun18 -> origin/xmfan/ca_jun18 2025-09-07T07:34:58.4635351Z * [new branch] xmfan/ca_jun24 -> origin/xmfan/ca_jun24 2025-09-07T07:34:58.4635561Z * [new branch] xmfan/ca_mem_base -> origin/xmfan/ca_mem_base 2025-09-07T07:34:58.4641754Z * [new branch] xmfan/ca_mem_fix -> origin/xmfan/ca_mem_fix 2025-09-07T07:34:58.4641944Z * [new branch] xmfan/ca_memory_fix -> origin/xmfan/ca_memory_fix 2025-09-07T07:34:58.4642174Z * [new branch] xmfan/ca_memory_fix_rebased -> origin/xmfan/ca_memory_fix_rebased 2025-09-07T07:34:58.4642511Z * [new branch] xmfan/ca_memory_fix_rebased2 -> origin/xmfan/ca_memory_fix_rebased2 2025-09-07T07:34:58.4642706Z * [new branch] xmfan/ca_move_to_cuda -> origin/xmfan/ca_move_to_cuda 2025-09-07T07:34:58.4642851Z * [new branch] xmfan/ca_nested -> origin/xmfan/ca_nested 2025-09-07T07:34:58.4643004Z * [new branch] xmfan/ca_overhead -> origin/xmfan/ca_overhead 2025-09-07T07:34:58.4643201Z * [new branch] xmfan/ca_overhead_0eba7e5451 -> origin/xmfan/ca_overhead_0eba7e5451 2025-09-07T07:34:58.4643348Z * [new branch] xmfan/ca_scalar -> origin/xmfan/ca_scalar 2025-09-07T07:34:58.4643532Z * [new branch] xmfan/ca_subclass_mem_fix -> origin/xmfan/ca_subclass_mem_fix 2025-09-07T07:34:58.4643679Z * [new branch] xmfan/ca_warm_mem -> origin/xmfan/ca_warm_mem 2025-09-07T07:34:58.4643838Z * [new branch] xmfan/ca_warm_mem_base -> origin/xmfan/ca_warm_mem_base 2025-09-07T07:34:58.4643985Z * [new branch] xmfan/cacu_jun18 -> origin/xmfan/cacu_jun18 2025-09-07T07:34:58.4644144Z * [new branch] xmfan/cacu_jun19 -> origin/xmfan/cacu_jun19 2025-09-07T07:34:58.4644313Z * [new branch] xmfan/cacu_jun4 -> origin/xmfan/cacu_jun4 2025-09-07T07:34:58.4644444Z * [new branch] xmfan/cacu_may27 -> origin/xmfan/cacu_may27 2025-09-07T07:34:58.4644620Z * [new branch] xmfan/disable_duck_shape -> origin/xmfan/disable_duck_shape 2025-09-07T07:34:58.4644816Z * [new branch] xmfan/fca_cpp_node_passthrough -> origin/xmfan/fca_cpp_node_passthrough 2025-09-07T07:34:58.4644965Z * [new branch] xmfan/issue_123374 -> origin/xmfan/issue_123374 2025-09-07T07:34:58.4645417Z * [new branch] xmfan/post_3945954741e2d37023c5d6954f9483008e0892f9 -> origin/xmfan/post_3945954741e2d37023c5d6954f9483008e0892f9 2025-09-07T07:34:58.4645708Z * [new branch] xmfan/pre_3945954741e2d37023c5d6954f9483008e0892f9 -> origin/xmfan/pre_3945954741e2d37023c5d6954f9483008e0892f9 2025-09-07T07:34:58.4651336Z * [new branch] xmfan/segfault_test -> origin/xmfan/segfault_test 
2025-09-07T07:34:58.4651717Z * [new branch] xmfan/single_step -> origin/xmfan/single_step 2025-09-07T07:34:58.4652636Z * [new branch] xmfan/sth_0829 -> origin/xmfan/sth_0829 2025-09-07T07:34:58.4653072Z * [new branch] xmfan/test -> origin/xmfan/test 2025-09-07T07:34:58.4657904Z * [new branch] yguo/debug-0226-constexpr -> origin/yguo/debug-0226-constexpr 2025-09-07T07:34:58.4663580Z * [new branch] yguo/new_latest_changes -> origin/yguo/new_latest_changes 2025-09-07T07:34:58.4668430Z * [new branch] yguo/patch_constexpr_changes -> origin/yguo/patch_constexpr_changes 2025-09-07T07:34:58.4672599Z * [new branch] yihan_quantization -> origin/yihan_quantization 2025-09-07T07:34:58.4678279Z * [new branch] yiming/add_jit_trace_benchmark -> origin/yiming/add_jit_trace_benchmark 2025-09-07T07:34:58.4679991Z * [new branch] yiming/add_nativert_benchmark -> origin/yiming/add_nativert_benchmark 2025-09-07T07:34:58.4680143Z * [new branch] yiming/bootcamp -> origin/yiming/bootcamp 2025-09-07T07:34:58.4680566Z * [new branch] zainr/canary-test -> origin/zainr/canary-test 2025-09-07T07:34:58.4680815Z * [new branch] zainr/cleanup-gh-runners -> origin/zainr/cleanup-gh-runners 2025-09-07T07:34:58.4680968Z * [new branch] zainr/git-push-v2 -> origin/zainr/git-push-v2 2025-09-07T07:34:58.4681138Z * [new branch] zainr/pull-migration-c -> origin/zainr/pull-migration-c 2025-09-07T07:34:58.4681263Z * [new branch] zainr/test -> origin/zainr/test 2025-09-07T07:34:58.4681582Z * [new branch] zainr/test2 -> origin/zainr/test2 2025-09-07T07:34:58.4681720Z * [new branch] zainr/unstable -> origin/zainr/unstable 2025-09-07T07:34:58.4681863Z * [new branch] zainr/unstable-xla -> origin/zainr/unstable-xla 2025-09-07T07:34:58.4682018Z * [new branch] zasdfgbnm-patch-3 -> origin/zasdfgbnm-patch-3 2025-09-07T07:34:58.4682132Z * [new branch] zb2p -> origin/zb2p 2025-09-07T07:34:58.4682298Z * [new branch] zero_grad_optimization -> origin/zero_grad_optimization 2025-09-07T07:34:58.4682468Z * [new branch] zeros-and-scatter-part2 -> origin/zeros-and-scatter-part2 2025-09-07T07:34:58.4682629Z * [new branch] zhxchen17/scratch/0 -> origin/zhxchen17/scratch/0 2025-09-07T07:34:58.4682787Z * [new branch] zhxhcen17/moodycamel -> origin/zhxhcen17/moodycamel 2025-09-07T07:34:58.4682917Z * [new branch] zxiiro/main -> origin/zxiiro/main 2025-09-07T07:34:58.4683254Z * [new tag] bc2caa7fdf006894eff7af936babde69ab5a40f8-huydhn-debug -> bc2caa7fdf006894eff7af936babde69ab5a40f8-huydhn-debug 2025-09-07T07:34:58.4683379Z * [new tag] ci/binaries/77164 -> ci/binaries/77164 2025-09-07T07:34:58.4683521Z * [new tag] ciflow/binaries/156049 -> ciflow/binaries/156049 2025-09-07T07:34:58.4683652Z * [new tag] ciflow/binaries/156712 -> ciflow/binaries/156712 2025-09-07T07:34:58.4683788Z * [new tag] ciflow/binaries/157432 -> ciflow/binaries/157432 2025-09-07T07:34:58.4683927Z * [new tag] ciflow/binaries/157685 -> ciflow/binaries/157685 2025-09-07T07:34:58.4684058Z * [new tag] ciflow/binaries/157689 -> ciflow/binaries/157689 2025-09-07T07:34:58.4684191Z * [new tag] ciflow/binaries/158104 -> ciflow/binaries/158104 2025-09-07T07:34:58.4684326Z * [new tag] ciflow/binaries/160229 -> ciflow/binaries/160229 2025-09-07T07:34:58.4684456Z * [new tag] ciflow/binaries/160720 -> ciflow/binaries/160720 2025-09-07T07:34:58.4684586Z * [new tag] ciflow/binaries/162080 -> ciflow/binaries/162080 2025-09-07T07:34:58.4684716Z * [new tag] ciflow/binaries/162329 -> ciflow/binaries/162329 2025-09-07T07:34:58.4684895Z * [new tag] ciflow/binaries_libtorch/156049 -> 
ciflow/binaries_libtorch/156049 2025-09-07T07:34:58.4685110Z * [new tag] ciflow/binaries_libtorch/156711 -> ciflow/binaries_libtorch/156711 2025-09-07T07:34:58.4685281Z * [new tag] ciflow/binaries_libtorch/157432 -> ciflow/binaries_libtorch/157432 2025-09-07T07:34:58.4685428Z * [new tag] ciflow/binaries_wheel/156049 -> ciflow/binaries_wheel/156049 2025-09-07T07:34:58.4685580Z * [new tag] ciflow/binaries_wheel/156711 -> ciflow/binaries_wheel/156711 2025-09-07T07:34:58.4685724Z * [new tag] ciflow/binaries_wheel/157432 -> ciflow/binaries_wheel/157432 2025-09-07T07:34:58.4685880Z * [new tag] ciflow/binaries_wheel/162136 -> ciflow/binaries_wheel/162136 2025-09-07T07:34:58.4686025Z * [new tag] ciflow/binaries_wheel/162252 -> ciflow/binaries_wheel/162252 2025-09-07T07:34:58.4686182Z * [new tag] ciflow/binaries_wheel/162325 -> ciflow/binaries_wheel/162325 2025-09-07T07:34:58.4686364Z * [new tag] ciflow/h100-distributed/156703 -> ciflow/h100-distributed/156703 2025-09-07T07:34:58.4686507Z * [new tag] ciflow/h100-symm-mem/157635 -> ciflow/h100-symm-mem/157635 2025-09-07T07:34:58.4686658Z * [new tag] ciflow/h100-symm-mem/161984 -> ciflow/h100-symm-mem/161984 2025-09-07T07:34:58.4687067Z * [new tag] ciflow/h100-symm-mem/162003 -> ciflow/h100-symm-mem/162003 2025-09-07T07:34:58.4687260Z * [new tag] ciflow/h100-symm-mem/162011 -> ciflow/h100-symm-mem/162011 2025-09-07T07:34:58.4687405Z * [new tag] ciflow/h100-symm-mem/162026 -> ciflow/h100-symm-mem/162026 2025-09-07T07:34:58.4687537Z * [new tag] ciflow/h100-symm-mem/162033 -> ciflow/h100-symm-mem/162033 2025-09-07T07:34:58.4687676Z * [new tag] ciflow/h100-symm-mem/162040 -> ciflow/h100-symm-mem/162040 2025-09-07T07:34:58.4687834Z * [new tag] ciflow/h100-symm-mem/162041 -> ciflow/h100-symm-mem/162041 2025-09-07T07:34:58.4687979Z * [new tag] ciflow/h100-symm-mem/162142 -> ciflow/h100-symm-mem/162142 2025-09-07T07:34:58.4688111Z * [new tag] ciflow/h100-symm-mem/162150 -> ciflow/h100-symm-mem/162150 2025-09-07T07:34:58.4688236Z * [new tag] ciflow/h100-symm-mem/162243 -> ciflow/h100-symm-mem/162243 2025-09-07T07:34:58.4688366Z * [new tag] ciflow/h100-symm-mem/162320 -> ciflow/h100-symm-mem/162320 2025-09-07T07:34:58.4688487Z * [new tag] ciflow/h100/159158 -> ciflow/h100/159158 2025-09-07T07:34:58.4688603Z * [new tag] ciflow/h100/160480 -> ciflow/h100/160480 2025-09-07T07:34:58.4688908Z * [new tag] ciflow/h100/161749 -> ciflow/h100/161749 2025-09-07T07:34:58.4689638Z * [new tag] ciflow/h100/162022 -> ciflow/h100/162022 2025-09-07T07:34:58.4689767Z * [new tag] ciflow/h100/162278 -> ciflow/h100/162278 2025-09-07T07:34:58.4691975Z * [new tag] ciflow/inductor-perf-test-nightly-rocm/156592 -> ciflow/inductor-perf-test-nightly-rocm/156592 2025-09-07T07:34:58.4692228Z * [new tag] ciflow/inductor-perf-test-nightly/156592 -> ciflow/inductor-perf-test-nightly/156592 2025-09-07T07:34:58.4692399Z * [new tag] ciflow/inductor-periodic/162063 -> ciflow/inductor-periodic/162063 2025-09-07T07:34:58.4692600Z * [new tag] ciflow/inductor-periodic/162227 -> ciflow/inductor-periodic/162227 2025-09-07T07:34:58.4692802Z * [new tag] ciflow/inductor-periodic/162323 -> ciflow/inductor-periodic/162323 2025-09-07T07:34:58.4694765Z * [new tag] ciflow/inductor-rocm/154170 -> ciflow/inductor-rocm/154170 2025-09-07T07:34:58.4694938Z * [new tag] ciflow/inductor-rocm/159146 -> ciflow/inductor-rocm/159146 2025-09-07T07:34:58.4695073Z * [new tag] ciflow/inductor-rocm/159158 -> ciflow/inductor-rocm/159158 2025-09-07T07:34:58.4695448Z * [new tag] ciflow/inductor-rocm/161715 -> 
ciflow/inductor-rocm/161715 2025-09-07T07:34:58.4695649Z * [new tag] ciflow/inductor-rocm/162053 -> ciflow/inductor-rocm/162053 2025-09-07T07:34:58.4695996Z * [new tag] ciflow/inductor-rocm/162056 -> ciflow/inductor-rocm/162056 2025-09-07T07:34:58.4696758Z * [new tag] ciflow/inductor/137400 -> ciflow/inductor/137400 2025-09-07T07:34:58.4697001Z * [new tag] ciflow/inductor/148180 -> ciflow/inductor/148180 2025-09-07T07:34:58.4697917Z * [new tag] ciflow/inductor/148328 -> ciflow/inductor/148328 2025-09-07T07:34:58.4698202Z * [new tag] ciflow/inductor/148484 -> ciflow/inductor/148484 2025-09-07T07:34:58.4698674Z * [new tag] ciflow/inductor/148492 -> ciflow/inductor/148492 2025-09-07T07:34:58.4698912Z * [new tag] ciflow/inductor/152624 -> ciflow/inductor/152624 2025-09-07T07:34:58.4699317Z * [new tag] ciflow/inductor/154694 -> ciflow/inductor/154694 2025-09-07T07:34:58.4699993Z * [new tag] ciflow/inductor/156049 -> ciflow/inductor/156049 2025-09-07T07:34:58.4700297Z * [new tag] ciflow/inductor/156592 -> ciflow/inductor/156592 2025-09-07T07:34:58.4700593Z * [new tag] ciflow/inductor/157635 -> ciflow/inductor/157635 2025-09-07T07:34:58.4701069Z * [new tag] ciflow/inductor/157685 -> ciflow/inductor/157685 2025-09-07T07:34:58.4701628Z * [new tag] ciflow/inductor/157686 -> ciflow/inductor/157686 2025-09-07T07:34:58.4702159Z * [new tag] ciflow/inductor/157689 -> ciflow/inductor/157689 2025-09-07T07:34:58.4702583Z * [new tag] ciflow/inductor/157699 -> ciflow/inductor/157699 2025-09-07T07:34:58.4703473Z * [new tag] ciflow/inductor/157743 -> ciflow/inductor/157743 2025-09-07T07:34:58.4703720Z * [new tag] ciflow/inductor/157994 -> ciflow/inductor/157994 2025-09-07T07:34:58.4704239Z * [new tag] ciflow/inductor/158091 -> ciflow/inductor/158091 2025-09-07T07:34:58.4704573Z * [new tag] ciflow/inductor/158104 -> ciflow/inductor/158104 2025-09-07T07:34:58.4705447Z * [new tag] ciflow/inductor/158404 -> ciflow/inductor/158404 2025-09-07T07:34:58.4705717Z * [new tag] ciflow/inductor/158647 -> ciflow/inductor/158647 2025-09-07T07:34:58.4706265Z * [new tag] ciflow/inductor/158932 -> ciflow/inductor/158932 2025-09-07T07:34:58.4706623Z * [new tag] ciflow/inductor/159146 -> ciflow/inductor/159146 2025-09-07T07:34:58.4707148Z * [new tag] ciflow/inductor/159158 -> ciflow/inductor/159158 2025-09-07T07:34:58.4707611Z * [new tag] ciflow/inductor/159274 -> ciflow/inductor/159274 2025-09-07T07:34:58.4708077Z * [new tag] ciflow/inductor/159664 -> ciflow/inductor/159664 2025-09-07T07:34:58.4709487Z * [new tag] ciflow/inductor/159778 -> ciflow/inductor/159778 2025-09-07T07:34:58.4709699Z * [new tag] ciflow/inductor/159835 -> ciflow/inductor/159835 2025-09-07T07:34:58.4709830Z * [new tag] ciflow/inductor/159944 -> ciflow/inductor/159944 2025-09-07T07:34:58.4712132Z * [new tag] ciflow/inductor/160161 -> ciflow/inductor/160161 2025-09-07T07:34:58.4712348Z * [new tag] ciflow/inductor/160174 -> ciflow/inductor/160174 2025-09-07T07:34:58.4712470Z * [new tag] ciflow/inductor/160323 -> ciflow/inductor/160323 2025-09-07T07:34:58.4712594Z * [new tag] ciflow/inductor/160324 -> ciflow/inductor/160324 2025-09-07T07:34:58.4712715Z * [new tag] ciflow/inductor/160325 -> ciflow/inductor/160325 2025-09-07T07:34:58.4712839Z * [new tag] ciflow/inductor/160326 -> ciflow/inductor/160326 2025-09-07T07:34:58.4712999Z * [new tag] ciflow/inductor/160327 -> ciflow/inductor/160327 2025-09-07T07:34:58.4713134Z * [new tag] ciflow/inductor/160328 -> ciflow/inductor/160328 2025-09-07T07:34:58.4714018Z * [new tag] ciflow/inductor/160329 -> 
ciflow/inductor/160329 2025-09-07T07:34:58.4714250Z * [new tag] ciflow/inductor/160480 -> ciflow/inductor/160480 2025-09-07T07:34:58.4714776Z * [new tag] ciflow/inductor/160532 -> ciflow/inductor/160532 2025-09-07T07:34:58.4716093Z * [new tag] ciflow/inductor/160539 -> ciflow/inductor/160539 2025-09-07T07:34:58.4716367Z * [new tag] ciflow/inductor/160580 -> ciflow/inductor/160580 2025-09-07T07:34:58.4716675Z * [new tag] ciflow/inductor/160685 -> ciflow/inductor/160685 2025-09-07T07:34:58.4717189Z * [new tag] ciflow/inductor/160686 -> ciflow/inductor/160686 2025-09-07T07:34:58.4718212Z * [new tag] ciflow/inductor/160687 -> ciflow/inductor/160687 2025-09-07T07:34:58.4718406Z * [new tag] ciflow/inductor/160688 -> ciflow/inductor/160688 2025-09-07T07:34:58.4718736Z * [new tag] ciflow/inductor/160690 -> ciflow/inductor/160690 2025-09-07T07:34:58.4719236Z * [new tag] ciflow/inductor/160706 -> ciflow/inductor/160706 2025-09-07T07:34:58.4719612Z * [new tag] ciflow/inductor/160729 -> ciflow/inductor/160729 2025-09-07T07:34:58.4720529Z * [new tag] ciflow/inductor/160798 -> ciflow/inductor/160798 2025-09-07T07:34:58.4720802Z * [new tag] ciflow/inductor/160836 -> ciflow/inductor/160836 2025-09-07T07:34:58.4721226Z * [new tag] ciflow/inductor/160843 -> ciflow/inductor/160843 2025-09-07T07:34:58.4722195Z * [new tag] ciflow/inductor/160869 -> ciflow/inductor/160869 2025-09-07T07:34:58.4722465Z * [new tag] ciflow/inductor/160920 -> ciflow/inductor/160920 2025-09-07T07:34:58.4722719Z * [new tag] ciflow/inductor/160943 -> ciflow/inductor/160943 2025-09-07T07:34:58.4723248Z * [new tag] ciflow/inductor/161092 -> ciflow/inductor/161092 2025-09-07T07:34:58.4723676Z * [new tag] ciflow/inductor/161093 -> ciflow/inductor/161093 2025-09-07T07:34:58.4724613Z * [new tag] ciflow/inductor/161109 -> ciflow/inductor/161109 2025-09-07T07:34:58.4724840Z * [new tag] ciflow/inductor/161118 -> ciflow/inductor/161118 2025-09-07T07:34:58.4725291Z * [new tag] ciflow/inductor/161178 -> ciflow/inductor/161178 2025-09-07T07:34:58.4725742Z * [new tag] ciflow/inductor/161246 -> ciflow/inductor/161246 2025-09-07T07:34:58.4726249Z * [new tag] ciflow/inductor/161349 -> ciflow/inductor/161349 2025-09-07T07:34:58.4726637Z * [new tag] ciflow/inductor/161350 -> ciflow/inductor/161350 2025-09-07T07:34:58.4727467Z * [new tag] ciflow/inductor/161351 -> ciflow/inductor/161351 2025-09-07T07:34:58.4727851Z * [new tag] ciflow/inductor/161397 -> ciflow/inductor/161397 2025-09-07T07:34:58.4728392Z * [new tag] ciflow/inductor/161404 -> ciflow/inductor/161404 2025-09-07T07:34:58.4728881Z * [new tag] ciflow/inductor/161405 -> ciflow/inductor/161405 2025-09-07T07:34:58.4729559Z * [new tag] ciflow/inductor/161406 -> ciflow/inductor/161406 2025-09-07T07:34:58.4729910Z * [new tag] ciflow/inductor/161410 -> ciflow/inductor/161410 2025-09-07T07:34:58.4730436Z * [new tag] ciflow/inductor/161414 -> ciflow/inductor/161414 2025-09-07T07:34:58.4731428Z * [new tag] ciflow/inductor/161442 -> ciflow/inductor/161442 2025-09-07T07:34:58.4731686Z * [new tag] ciflow/inductor/161458 -> ciflow/inductor/161458 2025-09-07T07:34:58.4732102Z * [new tag] ciflow/inductor/161468 -> ciflow/inductor/161468 2025-09-07T07:34:58.4732594Z * [new tag] ciflow/inductor/161469 -> ciflow/inductor/161469 2025-09-07T07:34:58.4733469Z * [new tag] ciflow/inductor/161485 -> ciflow/inductor/161485 2025-09-07T07:34:58.4733749Z * [new tag] ciflow/inductor/161499 -> ciflow/inductor/161499 2025-09-07T07:34:58.4734290Z * [new tag] ciflow/inductor/161534 -> ciflow/inductor/161534 
2025-09-07T07:34:58.4734659Z * [new tag] ciflow/inductor/161595 -> ciflow/inductor/161595 2025-09-07T07:34:58.4735134Z * [new tag] ciflow/inductor/161596 -> ciflow/inductor/161596 2025-09-07T07:34:58.4738575Z * [new tag] ciflow/inductor/161630 -> ciflow/inductor/161630 2025-09-07T07:34:58.4738764Z * [new tag] ciflow/inductor/161667 -> ciflow/inductor/161667 2025-09-07T07:34:58.4738882Z * [new tag] ciflow/inductor/161670 -> ciflow/inductor/161670 2025-09-07T07:34:58.4739009Z * [new tag] ciflow/inductor/161673 -> ciflow/inductor/161673 2025-09-07T07:34:58.4739132Z * [new tag] ciflow/inductor/161674 -> ciflow/inductor/161674 2025-09-07T07:34:58.4739412Z * [new tag] ciflow/inductor/161675 -> ciflow/inductor/161675 2025-09-07T07:34:58.4739540Z * [new tag] ciflow/inductor/161693 -> ciflow/inductor/161693 2025-09-07T07:34:58.4739662Z * [new tag] ciflow/inductor/161695 -> ciflow/inductor/161695 2025-09-07T07:34:58.4739785Z * [new tag] ciflow/inductor/161715 -> ciflow/inductor/161715 2025-09-07T07:34:58.4740246Z * [new tag] ciflow/inductor/161730 -> ciflow/inductor/161730 2025-09-07T07:34:58.4740515Z * [new tag] ciflow/inductor/161732 -> ciflow/inductor/161732 2025-09-07T07:34:58.4741263Z * [new tag] ciflow/inductor/161744 -> ciflow/inductor/161744 2025-09-07T07:34:58.4741408Z * [new tag] ciflow/inductor/161746 -> ciflow/inductor/161746 2025-09-07T07:34:58.4742181Z * [new tag] ciflow/inductor/161747 -> ciflow/inductor/161747 2025-09-07T07:34:58.4742415Z * [new tag] ciflow/inductor/161819 -> ciflow/inductor/161819 2025-09-07T07:34:58.4742892Z * [new tag] ciflow/inductor/161821 -> ciflow/inductor/161821 2025-09-07T07:34:58.4743335Z * [new tag] ciflow/inductor/161828 -> ciflow/inductor/161828 2025-09-07T07:34:58.4743744Z * [new tag] ciflow/inductor/161879 -> ciflow/inductor/161879 2025-09-07T07:34:58.4744126Z * [new tag] ciflow/inductor/161880 -> ciflow/inductor/161880 2025-09-07T07:34:58.4746265Z * [new tag] ciflow/inductor/161881 -> ciflow/inductor/161881 2025-09-07T07:34:58.4746457Z * [new tag] ciflow/inductor/161907 -> ciflow/inductor/161907 2025-09-07T07:34:58.4746583Z * [new tag] ciflow/inductor/161914 -> ciflow/inductor/161914 2025-09-07T07:34:58.4746709Z * [new tag] ciflow/inductor/161924 -> ciflow/inductor/161924 2025-09-07T07:34:58.4746870Z * [new tag] ciflow/inductor/161936 -> ciflow/inductor/161936 2025-09-07T07:34:58.4747259Z * [new tag] ciflow/inductor/161938 -> ciflow/inductor/161938 2025-09-07T07:34:58.4748047Z * [new tag] ciflow/inductor/161939 -> ciflow/inductor/161939 2025-09-07T07:34:58.4748227Z * [new tag] ciflow/inductor/161940 -> ciflow/inductor/161940 2025-09-07T07:34:58.4748559Z * [new tag] ciflow/inductor/161955 -> ciflow/inductor/161955 2025-09-07T07:34:58.4750791Z * [new tag] ciflow/inductor/161957 -> ciflow/inductor/161957 2025-09-07T07:34:58.4751214Z * [new tag] ciflow/inductor/161975 -> ciflow/inductor/161975 2025-09-07T07:34:58.4751351Z * [new tag] ciflow/inductor/161977 -> ciflow/inductor/161977 2025-09-07T07:34:58.4751496Z * [new tag] ciflow/inductor/161978 -> ciflow/inductor/161978 2025-09-07T07:34:58.4751628Z * [new tag] ciflow/inductor/161979 -> ciflow/inductor/161979 2025-09-07T07:34:58.4751962Z * [new tag] ciflow/inductor/161980 -> ciflow/inductor/161980 2025-09-07T07:34:58.4752281Z * [new tag] ciflow/inductor/161988 -> ciflow/inductor/161988 2025-09-07T07:34:58.4752768Z * [new tag] ciflow/inductor/161994 -> ciflow/inductor/161994 2025-09-07T07:34:58.4752930Z * [new tag] ciflow/inductor/162013 -> ciflow/inductor/162013 2025-09-07T07:34:58.4753372Z * [new tag] 
ciflow/inductor/162014 -> ciflow/inductor/162014 2025-09-07T07:34:58.4753740Z * [new tag] ciflow/inductor/162017 -> ciflow/inductor/162017 2025-09-07T07:34:58.4754190Z * [new tag] ciflow/inductor/162021 -> ciflow/inductor/162021 2025-09-07T07:34:58.4756634Z * [new tag] ciflow/inductor/162023 -> ciflow/inductor/162023 2025-09-07T07:34:58.4756785Z * [new tag] ciflow/inductor/162027 -> ciflow/inductor/162027 2025-09-07T07:34:58.4756985Z * [new tag] ciflow/inductor/162029 -> ciflow/inductor/162029 2025-09-07T07:34:58.4757183Z * [new tag] ciflow/inductor/162030 -> ciflow/inductor/162030 2025-09-07T07:34:58.4757367Z * [new tag] ciflow/inductor/162031 -> ciflow/inductor/162031 2025-09-07T07:34:58.4757487Z * [new tag] ciflow/inductor/162033 -> ciflow/inductor/162033 2025-09-07T07:34:58.4757662Z * [new tag] ciflow/inductor/162052 -> ciflow/inductor/162052 2025-09-07T07:34:58.4763926Z * [new tag] ciflow/inductor/162053 -> ciflow/inductor/162053 2025-09-07T07:34:58.4764100Z * [new tag] ciflow/inductor/162056 -> ciflow/inductor/162056 2025-09-07T07:34:58.4764239Z * [new tag] ciflow/inductor/162063 -> ciflow/inductor/162063 2025-09-07T07:34:58.4764378Z * [new tag] ciflow/inductor/162066 -> ciflow/inductor/162066 2025-09-07T07:34:58.4764543Z * [new tag] ciflow/inductor/162068 -> ciflow/inductor/162068 2025-09-07T07:34:58.4764682Z * [new tag] ciflow/inductor/162081 -> ciflow/inductor/162081 2025-09-07T07:34:58.4764818Z * [new tag] ciflow/inductor/162088 -> ciflow/inductor/162088 2025-09-07T07:34:58.4764961Z * [new tag] ciflow/inductor/162089 -> ciflow/inductor/162089 2025-09-07T07:34:58.4765087Z * [new tag] ciflow/inductor/162094 -> ciflow/inductor/162094 2025-09-07T07:34:58.4765250Z * [new tag] ciflow/inductor/162098 -> ciflow/inductor/162098 2025-09-07T07:34:58.4765377Z * [new tag] ciflow/inductor/162101 -> ciflow/inductor/162101 2025-09-07T07:34:58.4765503Z * [new tag] ciflow/inductor/162102 -> ciflow/inductor/162102 2025-09-07T07:34:58.4765634Z * [new tag] ciflow/inductor/162104 -> ciflow/inductor/162104 2025-09-07T07:34:58.4765774Z * [new tag] ciflow/inductor/162106 -> ciflow/inductor/162106 2025-09-07T07:34:58.4765904Z * [new tag] ciflow/inductor/162108 -> ciflow/inductor/162108 2025-09-07T07:34:58.4766027Z * [new tag] ciflow/inductor/162126 -> ciflow/inductor/162126 2025-09-07T07:34:58.4766164Z * [new tag] ciflow/inductor/162149 -> ciflow/inductor/162149 2025-09-07T07:34:58.4768189Z * [new tag] ciflow/inductor/162164 -> ciflow/inductor/162164 2025-09-07T07:34:58.4768489Z * [new tag] ciflow/inductor/162166 -> ciflow/inductor/162166 2025-09-07T07:34:58.4768637Z * [new tag] ciflow/inductor/162169 -> ciflow/inductor/162169 2025-09-07T07:34:58.4768755Z * [new tag] ciflow/inductor/162170 -> ciflow/inductor/162170 2025-09-07T07:34:58.4768879Z * [new tag] ciflow/inductor/162171 -> ciflow/inductor/162171 2025-09-07T07:34:58.4769005Z * [new tag] ciflow/inductor/162183 -> ciflow/inductor/162183 2025-09-07T07:34:58.4769135Z * [new tag] ciflow/inductor/162189 -> ciflow/inductor/162189 2025-09-07T07:34:58.4769257Z * [new tag] ciflow/inductor/162190 -> ciflow/inductor/162190 2025-09-07T07:34:58.4769386Z * [new tag] ciflow/inductor/162191 -> ciflow/inductor/162191 2025-09-07T07:34:58.4769809Z * [new tag] ciflow/inductor/162194 -> ciflow/inductor/162194 2025-09-07T07:34:58.4770321Z * [new tag] ciflow/inductor/162200 -> ciflow/inductor/162200 2025-09-07T07:34:58.4770968Z * [new tag] ciflow/inductor/162201 -> ciflow/inductor/162201 2025-09-07T07:34:58.4771380Z * [new tag] ciflow/inductor/162208 -> 
ciflow/inductor/162208 2025-09-07T07:34:58.4772317Z * [new tag] ciflow/inductor/162211 -> ciflow/inductor/162211 2025-09-07T07:34:58.4772800Z * [new tag] ciflow/inductor/162216 -> ciflow/inductor/162216 2025-09-07T07:34:58.4772960Z * [new tag] ciflow/inductor/162220 -> ciflow/inductor/162220 2025-09-07T07:34:58.4773997Z * [new tag] ciflow/inductor/162222 -> ciflow/inductor/162222 2025-09-07T07:34:58.4774234Z * [new tag] ciflow/inductor/162227 -> ciflow/inductor/162227 2025-09-07T07:34:58.4774554Z * [new tag] ciflow/inductor/162238 -> ciflow/inductor/162238 2025-09-07T07:34:58.4775047Z * [new tag] ciflow/inductor/162239 -> ciflow/inductor/162239 2025-09-07T07:34:58.4775526Z * [new tag] ciflow/inductor/162240 -> ciflow/inductor/162240 2025-09-07T07:34:58.4775930Z * [new tag] ciflow/inductor/162244 -> ciflow/inductor/162244 2025-09-07T07:34:58.4776487Z * [new tag] ciflow/inductor/162245 -> ciflow/inductor/162245 2025-09-07T07:34:58.4777242Z * [new tag] ciflow/inductor/162262 -> ciflow/inductor/162262 2025-09-07T07:34:58.4777581Z * [new tag] ciflow/inductor/162275 -> ciflow/inductor/162275 2025-09-07T07:34:58.4778166Z * [new tag] ciflow/inductor/162278 -> ciflow/inductor/162278 2025-09-07T07:34:58.4778406Z * [new tag] ciflow/inductor/162284 -> ciflow/inductor/162284 2025-09-07T07:34:58.4778906Z * [new tag] ciflow/inductor/162286 -> ciflow/inductor/162286 2025-09-07T07:34:58.4779361Z * [new tag] ciflow/inductor/162288 -> ciflow/inductor/162288 2025-09-07T07:34:58.4780515Z * [new tag] ciflow/inductor/162293 -> ciflow/inductor/162293 2025-09-07T07:34:58.4780724Z * [new tag] ciflow/inductor/162294 -> ciflow/inductor/162294 2025-09-07T07:34:58.4781096Z * [new tag] ciflow/inductor/162295 -> ciflow/inductor/162295 2025-09-07T07:34:58.4781545Z * [new tag] ciflow/inductor/162296 -> ciflow/inductor/162296 2025-09-07T07:34:58.4782103Z * [new tag] ciflow/inductor/162298 -> ciflow/inductor/162298 2025-09-07T07:34:58.4782467Z * [new tag] ciflow/inductor/162307 -> ciflow/inductor/162307 2025-09-07T07:34:58.4783083Z * [new tag] ciflow/inductor/162309 -> ciflow/inductor/162309 2025-09-07T07:34:58.4783398Z * [new tag] ciflow/inductor/162311 -> ciflow/inductor/162311 2025-09-07T07:34:58.4783996Z * [new tag] ciflow/inductor/162312 -> ciflow/inductor/162312 2025-09-07T07:34:58.4784300Z * [new tag] ciflow/inductor/162315 -> ciflow/inductor/162315 2025-09-07T07:34:58.4784887Z * [new tag] ciflow/inductor/162316 -> ciflow/inductor/162316 2025-09-07T07:34:58.4785233Z * [new tag] ciflow/inductor/162318 -> ciflow/inductor/162318 2025-09-07T07:34:58.4785769Z * [new tag] ciflow/inductor/162323 -> ciflow/inductor/162323 2025-09-07T07:34:58.4788512Z * [new tag] ciflow/inductor/162341 -> ciflow/inductor/162341 2025-09-07T07:34:58.4788725Z * [new tag] ciflow/inductor/162345 -> ciflow/inductor/162345 2025-09-07T07:34:58.4788869Z * [new tag] ciflow/inductor/3b9a386 -> ciflow/inductor/3b9a386 2025-09-07T07:34:58.4788997Z * [new tag] ciflow/inductor/3d4b92b -> ciflow/inductor/3d4b92b 2025-09-07T07:34:58.4789122Z * [new tag] ciflow/inductor/d224ac7 -> ciflow/inductor/d224ac7 2025-09-07T07:34:58.4789292Z * [new tag] ciflow/linux-aarch64/157994 -> ciflow/linux-aarch64/157994 2025-09-07T07:34:58.4789430Z * [new tag] ciflow/linux-aarch64/159737 -> ciflow/linux-aarch64/159737 2025-09-07T07:34:58.4789830Z * [new tag] ciflow/linux-aarch64/160078 -> ciflow/linux-aarch64/160078 2025-09-07T07:34:58.4789957Z * [new tag] ciflow/mps/157553 -> ciflow/mps/157553 2025-09-07T07:34:58.4790552Z * [new tag] ciflow/mps/157635 -> ciflow/mps/157635 
2025-09-07T07:34:58.4790860Z * [new tag] ciflow/mps/161988 -> ciflow/mps/161988 2025-09-07T07:34:58.4791259Z * [new tag] ciflow/mps/162108 -> ciflow/mps/162108 2025-09-07T07:34:58.4791684Z * [new tag] ciflow/mps/162153 -> ciflow/mps/162153 2025-09-07T07:34:58.4792185Z * [new tag] ciflow/mps/162281 -> ciflow/mps/162281 2025-09-07T07:34:58.4792681Z * [new tag] ciflow/nightly/156049 -> ciflow/nightly/156049 2025-09-07T07:34:58.4793099Z * [new tag] ciflow/nightly/158104 -> ciflow/nightly/158104 2025-09-07T07:34:58.4793667Z * [new tag] ciflow/op-benchmark/157994 -> ciflow/op-benchmark/157994 2025-09-07T07:34:58.4794339Z * [new tag] ciflow/periodic-rocm-mi300/161529 -> ciflow/periodic-rocm-mi300/161529 2025-09-07T07:34:58.4794631Z * [new tag] ciflow/periodic-rocm-mi300/161715 -> ciflow/periodic-rocm-mi300/161715 2025-09-07T07:34:58.4795651Z * [new tag] ciflow/periodic/054a2fd -> ciflow/periodic/054a2fd 2025-09-07T07:34:58.4795917Z * [new tag] ciflow/periodic/156703 -> ciflow/periodic/156703 2025-09-07T07:34:58.4796214Z * [new tag] ciflow/periodic/161715 -> ciflow/periodic/161715 2025-09-07T07:34:58.4796596Z * [new tag] ciflow/periodic/162021 -> ciflow/periodic/162021 2025-09-07T07:34:58.4796970Z * [new tag] ciflow/periodic/162323 -> ciflow/periodic/162323 2025-09-07T07:34:58.4797661Z * [new tag] ciflow/periodic/2a6d37d -> ciflow/periodic/2a6d37d 2025-09-07T07:34:58.4797992Z * [new tag] ciflow/periodic/317eeb8 -> ciflow/periodic/317eeb8 2025-09-07T07:34:58.4798883Z * [new tag] ciflow/periodic/3c32 -> ciflow/periodic/3c32 2025-09-07T07:34:58.4799071Z * [new tag] ciflow/periodic/3e98831 -> ciflow/periodic/3e98831 2025-09-07T07:34:58.4800134Z * [new tag] ciflow/periodic/94512-point -> ciflow/periodic/94512-point 2025-09-07T07:34:58.4800520Z * [new tag] ciflow/periodic/csl/test87519 -> ciflow/periodic/csl/test87519 2025-09-07T07:34:58.4801680Z * [new tag] ciflow/periodic/csltest88275 -> ciflow/periodic/csltest88275 2025-09-07T07:34:58.4801954Z * [new tag] ciflow/periodic/csltest88761 -> ciflow/periodic/csltest88761 2025-09-07T07:34:58.4803051Z * [new tag] ciflow/periodic/release_1.12 -> ciflow/periodic/release_1.12 2025-09-07T07:34:58.4803448Z * [new tag] ciflow/periodic/release_1.12.0 -> ciflow/periodic/release_1.12.0 2025-09-07T07:34:58.4804514Z * [new tag] ciflow/periodic/sha-ec5b83 -> ciflow/periodic/sha-ec5b83 2025-09-07T07:34:58.4805129Z * [new tag] ciflow/rocm-mi300/154170 -> ciflow/rocm-mi300/154170 2025-09-07T07:34:58.4805354Z * [new tag] ciflow/rocm-mi300/158747 -> ciflow/rocm-mi300/158747 2025-09-07T07:34:58.4805729Z * [new tag] ciflow/rocm-mi300/159146 -> ciflow/rocm-mi300/159146 2025-09-07T07:34:58.4806332Z * [new tag] ciflow/rocm-mi300/159158 -> ciflow/rocm-mi300/159158 2025-09-07T07:34:58.4806870Z * [new tag] ciflow/rocm-mi300/161715 -> ciflow/rocm-mi300/161715 2025-09-07T07:34:58.4807120Z * [new tag] ciflow/rocm-mi300/161957 -> ciflow/rocm-mi300/161957 2025-09-07T07:34:58.4807726Z * [new tag] ciflow/rocm-mi300/162053 -> ciflow/rocm-mi300/162053 2025-09-07T07:34:58.4808002Z * [new tag] ciflow/rocm-mi300/162056 -> ciflow/rocm-mi300/162056 2025-09-07T07:34:58.4809020Z * [new tag] ciflow/rocm-mi300/162112 -> ciflow/rocm-mi300/162112 2025-09-07T07:34:58.4809244Z * [new tag] ciflow/rocm-mi300/162245 -> ciflow/rocm-mi300/162245 2025-09-07T07:34:58.4809514Z * [new tag] ciflow/rocm-mi300/162278 -> ciflow/rocm-mi300/162278 2025-09-07T07:34:58.4810383Z * [new tag] ciflow/rocm-mi300/162288 -> ciflow/rocm-mi300/162288 2025-09-07T07:34:58.4810621Z * [new tag] ciflow/rocm-mi355/162053 -> 
ciflow/rocm-mi355/162053 2025-09-07T07:34:58.4811080Z * [new tag] ciflow/rocm-mi355/162056 -> ciflow/rocm-mi355/162056 2025-09-07T07:34:58.4813346Z * [new tag] ciflow/rocm/148492 -> ciflow/rocm/148492 2025-09-07T07:34:58.4813526Z * [new tag] ciflow/rocm/154170 -> ciflow/rocm/154170 2025-09-07T07:34:58.4813654Z * [new tag] ciflow/rocm/156491 -> ciflow/rocm/156491 2025-09-07T07:34:58.4813785Z * [new tag] ciflow/rocm/156592 -> ciflow/rocm/156592 2025-09-07T07:34:58.4813943Z * [new tag] ciflow/rocm/158747 -> ciflow/rocm/158747 2025-09-07T07:34:58.4814292Z * [new tag] ciflow/rocm/159146 -> ciflow/rocm/159146 2025-09-07T07:34:58.4815270Z * [new tag] ciflow/rocm/159158 -> ciflow/rocm/159158 2025-09-07T07:34:58.4815416Z * [new tag] ciflow/rocm/161715 -> ciflow/rocm/161715 2025-09-07T07:34:58.4815759Z * [new tag] ciflow/rocm/161972 -> ciflow/rocm/161972 2025-09-07T07:34:58.4816164Z * [new tag] ciflow/rocm/162052 -> ciflow/rocm/162052 2025-09-07T07:34:58.4816537Z * [new tag] ciflow/rocm/162053 -> ciflow/rocm/162053 2025-09-07T07:34:58.4818198Z * [new tag] ciflow/rocm/162056 -> ciflow/rocm/162056 2025-09-07T07:34:58.4818343Z * [new tag] ciflow/rocm/162112 -> ciflow/rocm/162112 2025-09-07T07:34:58.4818461Z * [new tag] ciflow/rocm/162278 -> ciflow/rocm/162278 2025-09-07T07:34:58.4818696Z * [new tag] ciflow/rocm/162288 -> ciflow/rocm/162288 2025-09-07T07:34:58.4819231Z * [new tag] ciflow/rocm/162305 -> ciflow/rocm/162305 2025-09-07T07:34:58.4819905Z * [new tag] ciflow/slow/01c7106 -> ciflow/slow/01c7106 2025-09-07T07:34:58.4820482Z * [new tag] ciflow/slow/0577043 -> ciflow/slow/0577043 2025-09-07T07:34:58.4821172Z * [new tag] ciflow/slow/0d5b74da0cab798fbfdb9caa53fad816999c8386-sdym -> ciflow/slow/0d5b74da0cab798fbfdb9caa53fad816999c8386-sdym 2025-09-07T07:34:58.4821604Z * [new tag] ciflow/slow/0e81104 -> ciflow/slow/0e81104 2025-09-07T07:34:58.4821856Z * [new tag] ciflow/slow/161395 -> ciflow/slow/161395 2025-09-07T07:34:58.4824726Z * [new tag] ciflow/slow/1732077 -> ciflow/slow/1732077 2025-09-07T07:34:58.4824884Z * [new tag] ciflow/slow/187eb7c -> ciflow/slow/187eb7c 2025-09-07T07:34:58.4825015Z * [new tag] ciflow/slow/1faef89 -> ciflow/slow/1faef89 2025-09-07T07:34:58.4825134Z * [new tag] ciflow/slow/3920ec1 -> ciflow/slow/3920ec1 2025-09-07T07:34:58.4825242Z * [new tag] ciflow/slow/3b7c6b2 -> ciflow/slow/3b7c6b2 2025-09-07T07:34:58.4825392Z * [new tag] ciflow/slow/59a3759 -> ciflow/slow/59a3759 2025-09-07T07:34:58.4825829Z * [new tag] ciflow/slow/70ef0bb -> ciflow/slow/70ef0bb 2025-09-07T07:34:58.4826768Z * [new tag] ciflow/slow/788ff06 -> ciflow/slow/788ff06 2025-09-07T07:34:58.4827235Z * [new tag] ciflow/slow/8751002215790a3a88750faa8f4366933e296693-sdym -> ciflow/slow/8751002215790a3a88750faa8f4366933e296693-sdym 2025-09-07T07:34:58.4827709Z * [new tag] ciflow/slow/9d85864 -> ciflow/slow/9d85864 2025-09-07T07:34:58.4828183Z * [new tag] ciflow/slow/9ffad5b -> ciflow/slow/9ffad5b 2025-09-07T07:34:58.4829425Z * [new tag] ciflow/slow/a206e8b -> ciflow/slow/a206e8b 2025-09-07T07:34:58.4829629Z * [new tag] ciflow/slow/a837609 -> ciflow/slow/a837609 2025-09-07T07:34:58.4829756Z * [new tag] ciflow/slow/af841f3 -> ciflow/slow/af841f3 2025-09-07T07:34:58.4832342Z * [new tag] ciflow/slow/da3aba1e46157c4df504b067477cdf2b3c96b194-sdym -> ciflow/slow/da3aba1e46157c4df504b067477cdf2b3c96b194-sdym 2025-09-07T07:34:58.4832531Z * [new tag] ciflow/triton_binaries/162329 -> ciflow/triton_binaries/162329 2025-09-07T07:34:58.4832688Z * [new tag] ciflow/trunk/113258 -> ciflow/trunk/113258 2025-09-07T07:34:58.4832807Z 
* [new tag] ciflow/trunk/137400 -> ciflow/trunk/137400 2025-09-07T07:34:58.4832930Z * [new tag] ciflow/trunk/148180 -> ciflow/trunk/148180 2025-09-07T07:34:58.4833076Z * [new tag] ciflow/trunk/148328 -> ciflow/trunk/148328 2025-09-07T07:34:58.4833415Z * [new tag] ciflow/trunk/148492 -> ciflow/trunk/148492 2025-09-07T07:34:58.4835171Z * [new tag] ciflow/trunk/148919 -> ciflow/trunk/148919 2025-09-07T07:34:58.4835342Z * [new tag] ciflow/trunk/152624 -> ciflow/trunk/152624 2025-09-07T07:34:58.4835464Z * [new tag] ciflow/trunk/154170 -> ciflow/trunk/154170 2025-09-07T07:34:58.4835603Z * [new tag] ciflow/trunk/154694 -> ciflow/trunk/154694 2025-09-07T07:34:58.4835746Z * [new tag] ciflow/trunk/156049 -> ciflow/trunk/156049 2025-09-07T07:34:58.4836064Z * [new tag] ciflow/trunk/156703 -> ciflow/trunk/156703 2025-09-07T07:34:58.4838314Z * [new tag] ciflow/trunk/156711 -> ciflow/trunk/156711 2025-09-07T07:34:58.4838623Z * [new tag] ciflow/trunk/157432 -> ciflow/trunk/157432 2025-09-07T07:34:58.4838786Z * [new tag] ciflow/trunk/157685 -> ciflow/trunk/157685 2025-09-07T07:34:58.4838900Z * [new tag] ciflow/trunk/157689 -> ciflow/trunk/157689 2025-09-07T07:34:58.4839149Z * [new tag] ciflow/trunk/157699 -> ciflow/trunk/157699 2025-09-07T07:34:58.4839276Z * [new tag] ciflow/trunk/157813 -> ciflow/trunk/157813 2025-09-07T07:34:58.4839562Z * [new tag] ciflow/trunk/157994 -> ciflow/trunk/157994 2025-09-07T07:34:58.4840263Z * [new tag] ciflow/trunk/158091 -> ciflow/trunk/158091 2025-09-07T07:34:58.4840749Z * [new tag] ciflow/trunk/158104 -> ciflow/trunk/158104 2025-09-07T07:34:58.4840875Z * [new tag] ciflow/trunk/158404 -> ciflow/trunk/158404 2025-09-07T07:34:58.4842822Z * [new tag] ciflow/trunk/158647 -> ciflow/trunk/158647 2025-09-07T07:34:58.4842999Z * [new tag] ciflow/trunk/158846 -> ciflow/trunk/158846 2025-09-07T07:34:58.4843133Z * [new tag] ciflow/trunk/159158 -> ciflow/trunk/159158 2025-09-07T07:34:58.4843253Z * [new tag] ciflow/trunk/159682 -> ciflow/trunk/159682 2025-09-07T07:34:58.4843422Z * [new tag] ciflow/trunk/159835 -> ciflow/trunk/159835 2025-09-07T07:34:58.4843777Z * [new tag] ciflow/trunk/160161 -> ciflow/trunk/160161 2025-09-07T07:34:58.4844459Z * [new tag] ciflow/trunk/160236 -> ciflow/trunk/160236 2025-09-07T07:34:58.4844655Z * [new tag] ciflow/trunk/160329 -> ciflow/trunk/160329 2025-09-07T07:34:58.4845454Z * [new tag] ciflow/trunk/160480 -> ciflow/trunk/160480 2025-09-07T07:34:58.4845913Z * [new tag] ciflow/trunk/160532 -> ciflow/trunk/160532 2025-09-07T07:34:58.4847130Z * [new tag] ciflow/trunk/160836 -> ciflow/trunk/160836 2025-09-07T07:34:58.4847609Z * [new tag] ciflow/trunk/160843 -> ciflow/trunk/160843 2025-09-07T07:34:58.4847753Z * [new tag] ciflow/trunk/160869 -> ciflow/trunk/160869 2025-09-07T07:34:58.4850937Z * [new tag] ciflow/trunk/160940 -> ciflow/trunk/160940 2025-09-07T07:34:58.4851250Z * [new tag] ciflow/trunk/160943 -> ciflow/trunk/160943 2025-09-07T07:34:58.4851400Z * [new tag] ciflow/trunk/160953 -> ciflow/trunk/160953 2025-09-07T07:34:58.4851546Z * [new tag] ciflow/trunk/161035 -> ciflow/trunk/161035 2025-09-07T07:34:58.4851663Z * [new tag] ciflow/trunk/161178 -> ciflow/trunk/161178 2025-09-07T07:34:58.4851920Z * [new tag] ciflow/trunk/161349 -> ciflow/trunk/161349 2025-09-07T07:34:58.4852061Z * [new tag] ciflow/trunk/161350 -> ciflow/trunk/161350 2025-09-07T07:34:58.4852272Z * [new tag] ciflow/trunk/161351 -> ciflow/trunk/161351 2025-09-07T07:34:58.4852664Z * [new tag] ciflow/trunk/161395 -> ciflow/trunk/161395 2025-09-07T07:34:58.4853445Z * [new tag] 
ciflow/trunk/161405 -> ciflow/trunk/161405 2025-09-07T07:34:58.4853671Z * [new tag] ciflow/trunk/161406 -> ciflow/trunk/161406 2025-09-07T07:34:58.4854138Z * [new tag] ciflow/trunk/161410 -> ciflow/trunk/161410 2025-09-07T07:34:58.4855531Z * [new tag] ciflow/trunk/161468 -> ciflow/trunk/161468 2025-09-07T07:34:58.4855888Z * [new tag] ciflow/trunk/161499 -> ciflow/trunk/161499 2025-09-07T07:34:58.4856050Z * [new tag] ciflow/trunk/161527 -> ciflow/trunk/161527 2025-09-07T07:34:58.4856177Z * [new tag] ciflow/trunk/161534 -> ciflow/trunk/161534 2025-09-07T07:34:58.4856740Z * [new tag] ciflow/trunk/161591 -> ciflow/trunk/161591 2025-09-07T07:34:58.4857102Z * [new tag] ciflow/trunk/161595 -> ciflow/trunk/161595 2025-09-07T07:34:58.4857533Z * [new tag] ciflow/trunk/161596 -> ciflow/trunk/161596 2025-09-07T07:34:58.4860523Z * [new tag] ciflow/trunk/161633 -> ciflow/trunk/161633 2025-09-07T07:34:58.4860830Z * [new tag] ciflow/trunk/161634 -> ciflow/trunk/161634 2025-09-07T07:34:58.4860981Z * [new tag] ciflow/trunk/161635 -> ciflow/trunk/161635 2025-09-07T07:34:58.4861330Z * [new tag] ciflow/trunk/161667 -> ciflow/trunk/161667 2025-09-07T07:34:58.4861450Z * [new tag] ciflow/trunk/161670 -> ciflow/trunk/161670 2025-09-07T07:34:58.4861560Z * [new tag] ciflow/trunk/161692 -> ciflow/trunk/161692 2025-09-07T07:34:58.4861811Z * [new tag] ciflow/trunk/161693 -> ciflow/trunk/161693 2025-09-07T07:34:58.4861944Z * [new tag] ciflow/trunk/161695 -> ciflow/trunk/161695 2025-09-07T07:34:58.4862055Z * [new tag] ciflow/trunk/161730 -> ciflow/trunk/161730 2025-09-07T07:34:58.4862170Z * [new tag] ciflow/trunk/161744 -> ciflow/trunk/161744 2025-09-07T07:34:58.4862479Z * [new tag] ciflow/trunk/161749 -> ciflow/trunk/161749 2025-09-07T07:34:58.4862926Z * [new tag] ciflow/trunk/161881 -> ciflow/trunk/161881 2025-09-07T07:34:58.4863360Z * [new tag] ciflow/trunk/161924 -> ciflow/trunk/161924 2025-09-07T07:34:58.4864169Z * [new tag] ciflow/trunk/161926 -> ciflow/trunk/161926 2025-09-07T07:34:58.4864291Z * [new tag] ciflow/trunk/161936 -> ciflow/trunk/161936 2025-09-07T07:34:58.4867161Z * [new tag] ciflow/trunk/161952 -> ciflow/trunk/161952 2025-09-07T07:34:58.4867630Z * [new tag] ciflow/trunk/161955 -> ciflow/trunk/161955 2025-09-07T07:34:58.4867915Z * [new tag] ciflow/trunk/161957 -> ciflow/trunk/161957 2025-09-07T07:34:58.4868065Z * [new tag] ciflow/trunk/161959 -> ciflow/trunk/161959 2025-09-07T07:34:58.4868191Z * [new tag] ciflow/trunk/161977 -> ciflow/trunk/161977 2025-09-07T07:34:58.4868306Z * [new tag] ciflow/trunk/161988 -> ciflow/trunk/161988 2025-09-07T07:34:58.4868564Z * [new tag] ciflow/trunk/161994 -> ciflow/trunk/161994 2025-09-07T07:34:58.4869156Z * [new tag] ciflow/trunk/162007 -> ciflow/trunk/162007 2025-09-07T07:34:58.4869313Z * [new tag] ciflow/trunk/162013 -> ciflow/trunk/162013 2025-09-07T07:34:58.4869422Z * [new tag] ciflow/trunk/162017 -> ciflow/trunk/162017 2025-09-07T07:34:58.4869534Z * [new tag] ciflow/trunk/162021 -> ciflow/trunk/162021 2025-09-07T07:34:58.4869695Z * [new tag] ciflow/trunk/162022 -> ciflow/trunk/162022 2025-09-07T07:34:58.4870110Z * [new tag] ciflow/trunk/162040 -> ciflow/trunk/162040 2025-09-07T07:34:58.4870795Z * [new tag] ciflow/trunk/162041 -> ciflow/trunk/162041 2025-09-07T07:34:58.4871355Z * [new tag] ciflow/trunk/162062 -> ciflow/trunk/162062 2025-09-07T07:34:58.4871590Z * [new tag] ciflow/trunk/162066 -> ciflow/trunk/162066 2025-09-07T07:34:58.4871950Z * [new tag] ciflow/trunk/162089 -> ciflow/trunk/162089 2025-09-07T07:34:58.4874409Z * [new tag] ciflow/trunk/162099 -> 
ciflow/trunk/162099 2025-09-07T07:34:58.4874566Z * [new tag] ciflow/trunk/162104 -> ciflow/trunk/162104 2025-09-07T07:34:58.4874680Z * [new tag] ciflow/trunk/162106 -> ciflow/trunk/162106 2025-09-07T07:34:58.4874801Z * [new tag] ciflow/trunk/162112 -> ciflow/trunk/162112 2025-09-07T07:34:58.4874917Z * [new tag] ciflow/trunk/162119 -> ciflow/trunk/162119 2025-09-07T07:34:58.4875028Z * [new tag] ciflow/trunk/162142 -> ciflow/trunk/162142 2025-09-07T07:34:58.4875147Z * [new tag] ciflow/trunk/162169 -> ciflow/trunk/162169 2025-09-07T07:34:58.4875403Z * [new tag] ciflow/trunk/162183 -> ciflow/trunk/162183 2025-09-07T07:34:58.4875829Z * [new tag] ciflow/trunk/162190 -> ciflow/trunk/162190 2025-09-07T07:34:58.4876247Z * [new tag] ciflow/trunk/162194 -> ciflow/trunk/162194 2025-09-07T07:34:58.4877743Z * [new tag] ciflow/trunk/162200 -> ciflow/trunk/162200 2025-09-07T07:34:58.4877885Z * [new tag] ciflow/trunk/162206 -> ciflow/trunk/162206 2025-09-07T07:34:58.4878026Z * [new tag] ciflow/trunk/162208 -> ciflow/trunk/162208 2025-09-07T07:34:58.4878297Z * [new tag] ciflow/trunk/162222 -> ciflow/trunk/162222 2025-09-07T07:34:58.4878745Z * [new tag] ciflow/trunk/162238 -> ciflow/trunk/162238 2025-09-07T07:34:58.4880908Z * [new tag] ciflow/trunk/162244 -> ciflow/trunk/162244 2025-09-07T07:34:58.4881218Z * [new tag] ciflow/trunk/162267 -> ciflow/trunk/162267 2025-09-07T07:34:58.4881353Z * [new tag] ciflow/trunk/162269 -> ciflow/trunk/162269 2025-09-07T07:34:58.4881493Z * [new tag] ciflow/trunk/162278 -> ciflow/trunk/162278 2025-09-07T07:34:58.4881730Z * [new tag] ciflow/trunk/162286 -> ciflow/trunk/162286 2025-09-07T07:34:58.4881866Z * [new tag] ciflow/trunk/162288 -> ciflow/trunk/162288 2025-09-07T07:34:58.4882064Z * [new tag] ciflow/trunk/162293 -> ciflow/trunk/162293 2025-09-07T07:34:58.4882872Z * [new tag] ciflow/trunk/162310 -> ciflow/trunk/162310 2025-09-07T07:34:58.4883419Z * [new tag] ciflow/trunk/162311 -> ciflow/trunk/162311 2025-09-07T07:34:58.4883574Z * [new tag] ciflow/trunk/162315 -> ciflow/trunk/162315 2025-09-07T07:34:58.4883822Z * [new tag] ciflow/trunk/162325 -> ciflow/trunk/162325 2025-09-07T07:34:58.4884566Z * [new tag] ciflow/trunk/162328 -> ciflow/trunk/162328 2025-09-07T07:34:58.4884821Z * [new tag] ciflow/trunk/162329 -> ciflow/trunk/162329 2025-09-07T07:34:58.4885951Z * [new tag] ciflow/unstable/123 -> ciflow/unstable/123 2025-09-07T07:34:58.4886401Z * [new tag] ciflow/vllm/162292 -> ciflow/vllm/162292 2025-09-07T07:34:58.4887245Z * [new tag] ciflow/win-arm64/156049 -> ciflow/win-arm64/156049 2025-09-07T07:34:58.4887412Z * [new tag] ciflow/win-arm64/158104 -> ciflow/win-arm64/158104 2025-09-07T07:34:58.4891624Z * [new tag] ciflow/xpu/157699 -> ciflow/xpu/157699 2025-09-07T07:34:58.4891922Z * [new tag] ciflow/xpu/157994 -> ciflow/xpu/157994 2025-09-07T07:34:58.4892063Z * [new tag] ciflow/xpu/159459 -> ciflow/xpu/159459 2025-09-07T07:34:58.4892264Z * [new tag] ciflow/xpu/159718 -> ciflow/xpu/159718 2025-09-07T07:34:58.4892396Z * [new tag] ciflow/xpu/159944 -> ciflow/xpu/159944 2025-09-07T07:34:58.4892596Z * [new tag] ciflow/xpu/160867 -> ciflow/xpu/160867 2025-09-07T07:34:58.4892833Z * [new tag] ciflow/xpu/160938 -> ciflow/xpu/160938 2025-09-07T07:34:58.4892940Z * [new tag] ciflow/xpu/160940 -> ciflow/xpu/160940 2025-09-07T07:34:58.4893183Z * [new tag] ciflow/xpu/160953 -> ciflow/xpu/160953 2025-09-07T07:34:58.4893303Z * [new tag] ciflow/xpu/161045 -> ciflow/xpu/161045 2025-09-07T07:34:58.4893411Z * [new tag] ciflow/xpu/161058 -> ciflow/xpu/161058 2025-09-07T07:34:58.4893779Z * [new 
tag] ciflow/xpu/161246 -> ciflow/xpu/161246 2025-09-07T07:34:58.4895556Z * [new tag] ciflow/xpu/161397 -> ciflow/xpu/161397 2025-09-07T07:34:58.4895845Z * [new tag] ciflow/xpu/161485 -> ciflow/xpu/161485 2025-09-07T07:34:58.4896187Z * [new tag] ciflow/xpu/161988 -> ciflow/xpu/161988 2025-09-07T07:34:58.4896490Z * [new tag] ciflow/xpu/162062 -> ciflow/xpu/162062 2025-09-07T07:34:58.4896598Z * [new tag] cslpull75 -> cslpull75 2025-09-07T07:34:58.4896908Z * [new tag] cslpull76 -> cslpull76 2025-09-07T07:34:58.4897932Z * [new tag] cslpull77 -> cslpull77 2025-09-07T07:34:58.4898079Z * [new tag] cslpull78 -> cslpull78 2025-09-07T07:34:58.4898539Z * [new tag] cslpull79 -> cslpull79 2025-09-07T07:34:58.4899701Z * [new tag] cslpull80 -> cslpull80 2025-09-07T07:34:58.4899991Z * [new tag] cslpull81 -> cslpull81 2025-09-07T07:34:58.4900145Z * [new tag] cslpull82 -> cslpull82 2025-09-07T07:34:58.4902678Z * [new tag] cslpull83 -> cslpull83 2025-09-07T07:34:58.4902822Z * [new tag] cslpull84 -> cslpull84 2025-09-07T07:34:58.4902929Z * [new tag] cslpull85 -> cslpull85 2025-09-07T07:34:58.4903023Z * [new tag] cslpull86 -> cslpull86 2025-09-07T07:34:58.4903126Z * [new tag] cslpull87 -> cslpull87 2025-09-07T07:34:58.4903723Z * [new tag] cslpull88 -> cslpull88 2025-09-07T07:34:58.4904009Z * [new tag] cslpull89 -> cslpull89 2025-09-07T07:34:58.4904410Z * [new tag] cslpull90 -> cslpull90 2025-09-07T07:34:58.4908136Z * [new tag] cslpull91 -> cslpull91 2025-09-07T07:34:58.4908430Z * [new tag] cslpull92 -> cslpull92 2025-09-07T07:34:58.4908563Z * [new tag] flight_5 -> flight_5 2025-09-07T07:34:58.4908691Z * [new tag] flight_5.1 -> flight_5.1 2025-09-07T07:34:58.4908787Z * [new tag] flight_5.2 -> flight_5.2 2025-09-07T07:34:58.4908882Z * [new tag] flight_5.3 -> flight_5.3 2025-09-07T07:34:58.4909120Z * [new tag] forpull1 -> forpull1 2025-09-07T07:34:58.4909267Z * [new tag] malfet/tag-2ef5611 -> malfet/tag-2ef5611 2025-09-07T07:34:58.4909689Z * [new tag] malfet/tag-317b1a0 -> malfet/tag-317b1a0 2025-09-07T07:34:58.4910307Z * [new tag] malfet/tag-ec6f767 -> malfet/tag-ec6f767 2025-09-07T07:34:58.4913565Z * [new tag] nightly-binary -> nightly-binary 2025-09-07T07:34:58.4913884Z * [new tag] sqzhang_flight4_plus -> sqzhang_flight4_plus 2025-09-07T07:34:58.4914060Z * [new tag] sqzhang_flight_3 -> sqzhang_flight_3 2025-09-07T07:34:58.4914392Z * [new tag] trunk/00636e0171e7e733628c408084805442270cf608 -> trunk/00636e0171e7e733628c408084805442270cf608 2025-09-07T07:34:58.4914711Z * [new tag] trunk/019fed39aa6b2dd8c69347378d53423e5efae8d4 -> trunk/019fed39aa6b2dd8c69347378d53423e5efae8d4 2025-09-07T07:34:58.4914992Z * [new tag] trunk/01ab325cc2e0dc221af4d710974e1b9175066544 -> trunk/01ab325cc2e0dc221af4d710974e1b9175066544 2025-09-07T07:34:58.4915244Z * [new tag] trunk/01edcd4df8bf0c7b4cc2d3ec868bd2059eeea83b -> trunk/01edcd4df8bf0c7b4cc2d3ec868bd2059eeea83b 2025-09-07T07:34:58.4915474Z * [new tag] trunk/040d00af048967dde7938d358d7f5988cbd18388 -> trunk/040d00af048967dde7938d358d7f5988cbd18388 2025-09-07T07:34:58.4916035Z * [new tag] trunk/0447f2d99b4351b2ff129dce6eebb371024f73e5 -> trunk/0447f2d99b4351b2ff129dce6eebb371024f73e5 2025-09-07T07:34:58.4916430Z * [new tag] trunk/047603d35bdc70046216384838d6340feab79bf4 -> trunk/047603d35bdc70046216384838d6340feab79bf4 2025-09-07T07:34:58.4917022Z * [new tag] trunk/06da7c0730b3764f178ec3a90dedf4ffa4202d81 -> trunk/06da7c0730b3764f178ec3a90dedf4ffa4202d81 2025-09-07T07:34:58.4919701Z * [new tag] trunk/081cab045472ce045634548cc6c14a4870641e23 -> 
trunk/081cab045472ce045634548cc6c14a4870641e23 2025-09-07T07:34:58.4920133Z * [new tag] trunk/09587daf8c9f21f5340f73921ce5f23d1a4a4572 -> trunk/09587daf8c9f21f5340f73921ce5f23d1a4a4572 2025-09-07T07:34:58.4920505Z * [new tag] trunk/09be1890d72cc34fc946965dc4a27736bf0ca8c6 -> trunk/09be1890d72cc34fc946965dc4a27736bf0ca8c6 2025-09-07T07:34:58.4920842Z * [new tag] trunk/09d2f1b6315d6d416fbf452793d65795863ebc66 -> trunk/09d2f1b6315d6d416fbf452793d65795863ebc66 2025-09-07T07:34:58.4921186Z * [new tag] trunk/0af70e2353e1dcda83175fd4834ecb7b63e009e0 -> trunk/0af70e2353e1dcda83175fd4834ecb7b63e009e0 2025-09-07T07:34:58.4921974Z * [new tag] trunk/0c0e056a9e20c17271a6144dd32c0c7e3ba26736 -> trunk/0c0e056a9e20c17271a6144dd32c0c7e3ba26736 2025-09-07T07:34:58.4922518Z * [new tag] trunk/0cd6c56bdfa9178ff61be82ce3b178926ddb64a9 -> trunk/0cd6c56bdfa9178ff61be82ce3b178926ddb64a9 2025-09-07T07:34:58.4922789Z * [new tag] trunk/0d421ace32c1605ee8e452ee1eeb03bd243dd96c -> trunk/0d421ace32c1605ee8e452ee1eeb03bd243dd96c 2025-09-07T07:34:58.4923558Z * [new tag] trunk/0d71a9dd5b4b6d1dde58d91c9b71d96bc6a6a171 -> trunk/0d71a9dd5b4b6d1dde58d91c9b71d96bc6a6a171 2025-09-07T07:34:58.4923976Z * [new tag] trunk/0d84ff3b78f55492d3d4708458c92d776274939e -> trunk/0d84ff3b78f55492d3d4708458c92d776274939e 2025-09-07T07:34:58.4924561Z * [new tag] trunk/0f45aaf4414048b17d720d0915ce221a8de8ec63 -> trunk/0f45aaf4414048b17d720d0915ce221a8de8ec63 2025-09-07T07:34:58.4925202Z * [new tag] trunk/0ff8eabf1387de5acd6712a03bda61f1a3dfa27f -> trunk/0ff8eabf1387de5acd6712a03bda61f1a3dfa27f 2025-09-07T07:34:58.4925715Z * [new tag] trunk/104f2680e03d13a4765ca69f905d8f16fc0c822f -> trunk/104f2680e03d13a4765ca69f905d8f16fc0c822f 2025-09-07T07:34:58.4926451Z * [new tag] trunk/12814701555d3e41dfcdf8f9273af5821e322df0 -> trunk/12814701555d3e41dfcdf8f9273af5821e322df0 2025-09-07T07:34:58.4927319Z * [new tag] trunk/13b65196db422bdb394cb482e208c61ed448898c -> trunk/13b65196db422bdb394cb482e208c61ed448898c 2025-09-07T07:34:58.4927822Z * [new tag] trunk/13d66e2a66eceed14b8a8f5a971087df4f688a46 -> trunk/13d66e2a66eceed14b8a8f5a971087df4f688a46 2025-09-07T07:34:58.4928418Z * [new tag] trunk/145a3a7bda15e3963a33eb1b54bba5d4a270b225 -> trunk/145a3a7bda15e3963a33eb1b54bba5d4a270b225 2025-09-07T07:34:58.4929099Z * [new tag] trunk/146371483318e17929daefd37c8e459d9d6d47bb -> trunk/146371483318e17929daefd37c8e459d9d6d47bb 2025-09-07T07:34:58.4929586Z * [new tag] trunk/15c77a8cfd341e74fd124b077492ef2bfa51b339 -> trunk/15c77a8cfd341e74fd124b077492ef2bfa51b339 2025-09-07T07:34:58.4930142Z * [new tag] trunk/17fa8eec4a1e32939ab4d364ee6e75487a79b654 -> trunk/17fa8eec4a1e32939ab4d364ee6e75487a79b654 2025-09-07T07:34:58.4934631Z * [new tag] trunk/190c391a28845a14df26abb228d26aa813efb20c -> trunk/190c391a28845a14df26abb228d26aa813efb20c 2025-09-07T07:34:58.4934924Z * [new tag] trunk/1a588ace4667bde1331fbd8ed957157dca5cee68 -> trunk/1a588ace4667bde1331fbd8ed957157dca5cee68 2025-09-07T07:34:58.4935157Z * [new tag] trunk/1aa7476885e8f6e7b0ec3a5b6383aad9d3f343e7 -> trunk/1aa7476885e8f6e7b0ec3a5b6383aad9d3f343e7 2025-09-07T07:34:58.4935391Z * [new tag] trunk/1aeb421c342c9e9607842f4c87cb46e8e816ee53 -> trunk/1aeb421c342c9e9607842f4c87cb46e8e816ee53 2025-09-07T07:34:58.4935617Z * [new tag] trunk/1c1b28d5b6a942fafe23b2f09302d93c25226d4a -> trunk/1c1b28d5b6a942fafe23b2f09302d93c25226d4a 2025-09-07T07:34:58.4935994Z * [new tag] trunk/1ebd70d0c0d562d3be9abdee2a21906584af7d99 -> trunk/1ebd70d0c0d562d3be9abdee2a21906584af7d99 2025-09-07T07:34:58.4936224Z * [new tag] 
trunk/1ec2c15914da4ef7bd926ed9aebc8671c75fe965 -> trunk/1ec2c15914da4ef7bd926ed9aebc8671c75fe965 2025-09-07T07:34:58.4936461Z * [new tag] trunk/1f51056bd64e73d1aa81321bc3c098575b1bc78a -> trunk/1f51056bd64e73d1aa81321bc3c098575b1bc78a 2025-09-07T07:34:58.4936685Z * [new tag] trunk/1f820de639c75a1562d3fb03f160439f853ae07b -> trunk/1f820de639c75a1562d3fb03f160439f853ae07b 2025-09-07T07:34:58.4936905Z * [new tag] trunk/204697f0e695d82894c5010fbec664c4391f90cc -> trunk/204697f0e695d82894c5010fbec664c4391f90cc 2025-09-07T07:34:58.4937437Z * [new tag] trunk/20629b1619fe636227d01fc85ba221daa7185a05 -> trunk/20629b1619fe636227d01fc85ba221daa7185a05 2025-09-07T07:34:58.4937913Z * [new tag] trunk/20b47acef845e9c4f71da9429a396d293f50ebe7 -> trunk/20b47acef845e9c4f71da9429a396d293f50ebe7 2025-09-07T07:34:58.4938277Z * [new tag] trunk/20bfb2539d7c5250379648eda35f80b8a7d642dd -> trunk/20bfb2539d7c5250379648eda35f80b8a7d642dd 2025-09-07T07:34:58.4938865Z * [new tag] trunk/21fae99c180d17def562797ea0fb154d8fdf88e3 -> trunk/21fae99c180d17def562797ea0fb154d8fdf88e3 2025-09-07T07:34:58.4939512Z * [new tag] trunk/248355faf53f9f7ba2fd0a367d59600c6d991e7f -> trunk/248355faf53f9f7ba2fd0a367d59600c6d991e7f 2025-09-07T07:34:58.4939948Z * [new tag] trunk/25f4aaed9ec26f39c13862323ff8582006473d23 -> trunk/25f4aaed9ec26f39c13862323ff8582006473d23 2025-09-07T07:34:58.4940739Z * [new tag] trunk/261a84a1764412f8e659c956e3f81997ec3de9d5 -> trunk/261a84a1764412f8e659c956e3f81997ec3de9d5 2025-09-07T07:34:58.4941219Z * [new tag] trunk/28f4ab0737937858730f29f5c4e601e109cf9d5f -> trunk/28f4ab0737937858730f29f5c4e601e109cf9d5f 2025-09-07T07:34:58.4941809Z * [new tag] trunk/291cd11f2d5df6f48d348cce0e4e762f274f4dc4 -> trunk/291cd11f2d5df6f48d348cce0e4e762f274f4dc4 2025-09-07T07:34:58.4942333Z * [new tag] trunk/29280864d941e6108ab57f7298f520c0cf9696e9 -> trunk/29280864d941e6108ab57f7298f520c0cf9696e9 2025-09-07T07:34:58.4944702Z * [new tag] trunk/2a45837e98c63cae9d1a2e2133a727b829e549d5 -> trunk/2a45837e98c63cae9d1a2e2133a727b829e549d5 2025-09-07T07:34:58.4945196Z * [new tag] trunk/2a5c0785e2f975697fd7bdf1411de6e03dcaa1ef -> trunk/2a5c0785e2f975697fd7bdf1411de6e03dcaa1ef 2025-09-07T07:34:58.4945500Z * [new tag] trunk/2b8a83901c58a0858ea9e4ce00055f48e6ed164c -> trunk/2b8a83901c58a0858ea9e4ce00055f48e6ed164c 2025-09-07T07:34:58.4945739Z * [new tag] trunk/2ba65472dd54488a86a50326ea990195fc6732d6 -> trunk/2ba65472dd54488a86a50326ea990195fc6732d6 2025-09-07T07:34:58.4946000Z * [new tag] trunk/2c03f0acc53ed13fe8ebfe809129f25996e009a0 -> trunk/2c03f0acc53ed13fe8ebfe809129f25996e009a0 2025-09-07T07:34:58.4951723Z * [new tag] trunk/2dd529df0092799f68ee7afcf52338276906706a -> trunk/2dd529df0092799f68ee7afcf52338276906706a 2025-09-07T07:34:58.4952185Z * [new tag] trunk/2f6b4b1ad3f82bb3bd984f6e65744ea339ffb8b5 -> trunk/2f6b4b1ad3f82bb3bd984f6e65744ea339ffb8b5 2025-09-07T07:34:58.4952617Z * [new tag] trunk/2fa0520a64ed8aa734a56c4d124958f0b5711ca8 -> trunk/2fa0520a64ed8aa734a56c4d124958f0b5711ca8 2025-09-07T07:34:58.4953017Z * [new tag] trunk/302df2ac5dc4222294c09d48804a2dddb8f4bad8 -> trunk/302df2ac5dc4222294c09d48804a2dddb8f4bad8 2025-09-07T07:34:58.4953459Z * [new tag] trunk/33028597bfa2e0178e28c8cce33cb9b3800cac43 -> trunk/33028597bfa2e0178e28c8cce33cb9b3800cac43 2025-09-07T07:34:58.4953846Z * [new tag] trunk/34aa78274d6770086025a967fa63a86830e08176 -> trunk/34aa78274d6770086025a967fa63a86830e08176 2025-09-07T07:34:58.4954522Z * [new tag] trunk/3559c354ce6a14d11fe29fb12fa2747a2f2af449 -> trunk/3559c354ce6a14d11fe29fb12fa2747a2f2af449 
2025-09-07T07:34:58.4955379Z * [new tag] trunk/36d207fcaaede0d1e58a5168084c307b32b6fd8b -> trunk/36d207fcaaede0d1e58a5168084c307b32b6fd8b 2025-09-07T07:34:58.4955712Z * [new tag] trunk/377033757ae5ca524ea842f1b0a5f446ed3d8fe0 -> trunk/377033757ae5ca524ea842f1b0a5f446ed3d8fe0 2025-09-07T07:34:58.4956222Z * [new tag] trunk/3771380f83fcac154a7c89ad679311d8c4818287 -> trunk/3771380f83fcac154a7c89ad679311d8c4818287 2025-09-07T07:34:58.4956475Z * [new tag] trunk/3a207816cc569f78863d86c01f2a3d265350e39f -> trunk/3a207816cc569f78863d86c01f2a3d265350e39f 2025-09-07T07:34:58.4956736Z * [new tag] trunk/3a20a20e7065ec927fdd216d4da3b04f879b3c67 -> trunk/3a20a20e7065ec927fdd216d4da3b04f879b3c67 2025-09-07T07:34:58.4957078Z * [new tag] trunk/3bbc2e3e4f025523eaa5dbff220b3e96bca608d0 -> trunk/3bbc2e3e4f025523eaa5dbff220b3e96bca608d0 2025-09-07T07:34:58.4961007Z * [new tag] trunk/3c0ff1b569c45cfa6935ad8031a9d4cf1551aa3f -> trunk/3c0ff1b569c45cfa6935ad8031a9d4cf1551aa3f 2025-09-07T07:34:58.4961471Z * [new tag] trunk/3c45af079afc92a03b03ddf4f9198902ffcf30cf -> trunk/3c45af079afc92a03b03ddf4f9198902ffcf30cf 2025-09-07T07:34:58.4962435Z * [new tag] trunk/3dde5d7f9bf80dd6623a712bc429e9e4302464b5 -> trunk/3dde5d7f9bf80dd6623a712bc429e9e4302464b5 2025-09-07T07:34:58.4962743Z * [new tag] trunk/403a3a393cda7e60f503f3b04b8805a845dcf45d -> trunk/403a3a393cda7e60f503f3b04b8805a845dcf45d 2025-09-07T07:34:58.4963007Z * [new tag] trunk/420c52ecf36f86d32da0853bfbe074b682b070aa -> trunk/420c52ecf36f86d32da0853bfbe074b682b070aa 2025-09-07T07:34:58.4963257Z * [new tag] trunk/43b7c86a2c0f91320f5c5f4827b111edff06fdb6 -> trunk/43b7c86a2c0f91320f5c5f4827b111edff06fdb6 2025-09-07T07:34:58.4963499Z * [new tag] trunk/451ed931562ec8b46d1f7e6c266a68132a119336 -> trunk/451ed931562ec8b46d1f7e6c266a68132a119336 2025-09-07T07:34:58.4963750Z * [new tag] trunk/480c7391126656154318fabf1d57ebc01e196e63 -> trunk/480c7391126656154318fabf1d57ebc01e196e63 2025-09-07T07:34:58.4964007Z * [new tag] trunk/48bedd753da22634aa94fbafeb731e82025404f3 -> trunk/48bedd753da22634aa94fbafeb731e82025404f3 2025-09-07T07:34:58.4964253Z * [new tag] trunk/494878a11b79071ada0b98f34042d47155be6d1c -> trunk/494878a11b79071ada0b98f34042d47155be6d1c 2025-09-07T07:34:58.4964510Z * [new tag] trunk/4ae57d448c0a7d37e4cfd5c27d977fad2cef4051 -> trunk/4ae57d448c0a7d37e4cfd5c27d977fad2cef4051 2025-09-07T07:34:58.4964762Z * [new tag] trunk/4cdaf8265d86f984254b62052da8c26ef61ef1cf -> trunk/4cdaf8265d86f984254b62052da8c26ef61ef1cf 2025-09-07T07:34:58.4965012Z * [new tag] trunk/4d4abec80f03cd8fdefe1d9cb3a60d3690cd777e -> trunk/4d4abec80f03cd8fdefe1d9cb3a60d3690cd777e 2025-09-07T07:34:58.4965273Z * [new tag] trunk/4e42aa8ffc44b8340eb0eeaf80a2cafc4763a186 -> trunk/4e42aa8ffc44b8340eb0eeaf80a2cafc4763a186 2025-09-07T07:34:58.4965524Z * [new tag] trunk/4f72d932feee0749397fec876dcd43994f50b215 -> trunk/4f72d932feee0749397fec876dcd43994f50b215 2025-09-07T07:34:58.4965779Z * [new tag] trunk/50fc22dedf3c4a27be61fa05551c4f320281b42d -> trunk/50fc22dedf3c4a27be61fa05551c4f320281b42d 2025-09-07T07:34:58.4966035Z * [new tag] trunk/5211f1f908907ffc064b56e43cf8659f7fc22aa9 -> trunk/5211f1f908907ffc064b56e43cf8659f7fc22aa9 2025-09-07T07:34:58.4966279Z * [new tag] trunk/524b78d4f67045b83bb69edc56ab16efe282971c -> trunk/524b78d4f67045b83bb69edc56ab16efe282971c 2025-09-07T07:34:58.4966552Z * [new tag] trunk/54e275e0d81fe1e1ccfa4fb5f2a5a9aaca00ca15 -> trunk/54e275e0d81fe1e1ccfa4fb5f2a5a9aaca00ca15 2025-09-07T07:34:58.4966990Z * [new tag] trunk/5561e45758d59c94605873d5db48ed459c004c3b -> 
trunk/5561e45758d59c94605873d5db48ed459c004c3b 2025-09-07T07:34:58.4967321Z * [new tag] trunk/57278d45f046d4f89f45d373b1af4dd56934ff24 -> trunk/57278d45f046d4f89f45d373b1af4dd56934ff24 2025-09-07T07:34:58.4967576Z * [new tag] trunk/5927a70934ccf7b70182d364c23245a7dd685503 -> trunk/5927a70934ccf7b70182d364c23245a7dd685503 2025-09-07T07:34:58.4967836Z * [new tag] trunk/5985e28912aeb40b103ebfcf2fd0665eb4a50599 -> trunk/5985e28912aeb40b103ebfcf2fd0665eb4a50599 2025-09-07T07:34:58.4968070Z * [new tag] trunk/5a2da090ed6db88bb657c4e51ec0b310cd08bff6 -> trunk/5a2da090ed6db88bb657c4e51ec0b310cd08bff6 2025-09-07T07:34:58.4968306Z * [new tag] trunk/5c473e9f5ee0ef0fc38e6cf34a95b547f8cdc8d5 -> trunk/5c473e9f5ee0ef0fc38e6cf34a95b547f8cdc8d5 2025-09-07T07:34:58.4968532Z * [new tag] trunk/5c67426d6847667a7c55a2dd01f470fa37238c18 -> trunk/5c67426d6847667a7c55a2dd01f470fa37238c18 2025-09-07T07:34:58.4968760Z * [new tag] trunk/5da573c42c332bc68d4b7946c69f690a876d951a -> trunk/5da573c42c332bc68d4b7946c69f690a876d951a 2025-09-07T07:34:58.4969221Z * [new tag] trunk/5e5870e858f60ff4bf87d03f3592097e934a9580 -> trunk/5e5870e858f60ff4bf87d03f3592097e934a9580 2025-09-07T07:34:58.4969575Z * [new tag] trunk/5f3cbc9442aa55b5afb29f4ac8ca9be569003e84 -> trunk/5f3cbc9442aa55b5afb29f4ac8ca9be569003e84 2025-09-07T07:34:58.4969881Z * [new tag] trunk/600c25e9a17fe56e3dee872be8854db08916ba0c -> trunk/600c25e9a17fe56e3dee872be8854db08916ba0c 2025-09-07T07:34:58.4970214Z * [new tag] trunk/601ae8e4831fc8123fffcfb8fd2e6b6381b42e14 -> trunk/601ae8e4831fc8123fffcfb8fd2e6b6381b42e14 2025-09-07T07:34:58.4972556Z * [new tag] trunk/6087ef41e54c2494b117ffd923faf20f515a6806 -> trunk/6087ef41e54c2494b117ffd923faf20f515a6806 2025-09-07T07:34:58.4972855Z * [new tag] trunk/626cb7df8161dd4ecb4fe43b60f37ce9076f56b1 -> trunk/626cb7df8161dd4ecb4fe43b60f37ce9076f56b1 2025-09-07T07:34:58.4973129Z * [new tag] trunk/62c3f9a97fd3dea7132a93066d32d893ffe101e6 -> trunk/62c3f9a97fd3dea7132a93066d32d893ffe101e6 2025-09-07T07:34:58.4973386Z * [new tag] trunk/63a9c23fe99eacfd09610c36dfe8f01b053c1a35 -> trunk/63a9c23fe99eacfd09610c36dfe8f01b053c1a35 2025-09-07T07:34:58.4973621Z * [new tag] trunk/65985937d97505f648b6ed852c3129f2dd08b251 -> trunk/65985937d97505f648b6ed852c3129f2dd08b251 2025-09-07T07:34:58.4978928Z * [new tag] trunk/66f3b4a682a6153517dd23369fdc3289b6494b07 -> trunk/66f3b4a682a6153517dd23369fdc3289b6494b07 2025-09-07T07:34:58.4979331Z * [new tag] trunk/6737e2c996990024187ba620d2764f3b6f6add2c -> trunk/6737e2c996990024187ba620d2764f3b6f6add2c 2025-09-07T07:34:58.4979571Z * [new tag] trunk/67c31dcd364f10072a55f4a30ffd1151c686283a -> trunk/67c31dcd364f10072a55f4a30ffd1151c686283a 2025-09-07T07:34:58.4979792Z * [new tag] trunk/68738beff73e9c3512e18b4edea811a897ce42db -> trunk/68738beff73e9c3512e18b4edea811a897ce42db 2025-09-07T07:34:58.4980014Z * [new tag] trunk/69a25f68884a168550695fdb1a7c310c54d29536 -> trunk/69a25f68884a168550695fdb1a7c310c54d29536 2025-09-07T07:34:58.4980232Z * [new tag] trunk/6b1900c22f1a07b9519346898d4c71d8a2b0f12f -> trunk/6b1900c22f1a07b9519346898d4c71d8a2b0f12f 2025-09-07T07:34:58.4980446Z * [new tag] trunk/6b8b3ac4403f771bd4a8f9a45d93347304148774 -> trunk/6b8b3ac4403f771bd4a8f9a45d93347304148774 2025-09-07T07:34:58.4980706Z * [new tag] trunk/6f7608d603834d6068b2e7a5d59bec3973b6bb1b -> trunk/6f7608d603834d6068b2e7a5d59bec3973b6bb1b 2025-09-07T07:34:58.4981109Z * [new tag] trunk/70d36e047dfb3488fd6335016711a784d810ebda -> trunk/70d36e047dfb3488fd6335016711a784d810ebda 2025-09-07T07:34:58.4981333Z * [new tag] 
trunk/71992dd805ff9d6763f77214dfe8b0465e88c87b -> trunk/71992dd805ff9d6763f77214dfe8b0465e88c87b 2025-09-07T07:34:58.4981621Z * [new tag] trunk/734ce8eba9c69381f187359bf0fef1d71d84cd20 -> trunk/734ce8eba9c69381f187359bf0fef1d71d84cd20 2025-09-07T07:34:58.4981836Z * [new tag] trunk/73eb4511fb863a37944342b7e92aae706de603c8 -> trunk/73eb4511fb863a37944342b7e92aae706de603c8 2025-09-07T07:34:58.4982069Z * [new tag] trunk/75bc23cfc345bd4c05e7f97c416c4b3d2d1fa64b -> trunk/75bc23cfc345bd4c05e7f97c416c4b3d2d1fa64b 2025-09-07T07:34:58.4982285Z * [new tag] trunk/771f369448321a387f2018535bc8b8b6e5f12fab -> trunk/771f369448321a387f2018535bc8b8b6e5f12fab 2025-09-07T07:34:58.4982508Z * [new tag] trunk/789d4942127143f2adcb53612c058ce4c9a2cf20 -> trunk/789d4942127143f2adcb53612c058ce4c9a2cf20 2025-09-07T07:34:58.4982725Z * [new tag] trunk/791eff96c85678c950888f9da24650083ee673fe -> trunk/791eff96c85678c950888f9da24650083ee673fe 2025-09-07T07:34:58.4982943Z * [new tag] trunk/793fc12aff1f69fbbf9f4278182fb52bbe350fc9 -> trunk/793fc12aff1f69fbbf9f4278182fb52bbe350fc9 2025-09-07T07:34:58.4983170Z * [new tag] trunk/79fcd5247a9a129eee526a14df30bfc6a22b3f01 -> trunk/79fcd5247a9a129eee526a14df30bfc6a22b3f01 2025-09-07T07:34:58.4983476Z * [new tag] trunk/7f4ff79210eb06924f223ae3a1941ee0e2635348 -> trunk/7f4ff79210eb06924f223ae3a1941ee0e2635348 2025-09-07T07:34:58.4986483Z * [new tag] trunk/8076a185c85112be62be292eb47409c88a585b1c -> trunk/8076a185c85112be62be292eb47409c88a585b1c 2025-09-07T07:34:58.4991265Z * [new tag] trunk/80dd397f1979371a5583fa3d5c7352029522a78d -> trunk/80dd397f1979371a5583fa3d5c7352029522a78d 2025-09-07T07:34:58.4995386Z * [new tag] trunk/8171d6052ec12628eb67e0040839314056014429 -> trunk/8171d6052ec12628eb67e0040839314056014429 2025-09-07T07:34:58.4997501Z * [new tag] trunk/81aeefa657b7ccc26b275c50a9f33b2f056e8071 -> trunk/81aeefa657b7ccc26b275c50a9f33b2f056e8071 2025-09-07T07:34:58.4997876Z * [new tag] trunk/81b7b16618bda250ce55982894a83dc0805eb64c -> trunk/81b7b16618bda250ce55982894a83dc0805eb64c 2025-09-07T07:34:58.5002469Z * [new tag] trunk/827f0d405448de31f79d1089f7d7fceab2f87895 -> trunk/827f0d405448de31f79d1089f7d7fceab2f87895 2025-09-07T07:34:58.5002897Z * [new tag] trunk/82f63c8f6de63c30132a8ac299b6e8c2fd0d3fe8 -> trunk/82f63c8f6de63c30132a8ac299b6e8c2fd0d3fe8 2025-09-07T07:34:58.5003191Z * [new tag] trunk/850e1382a9c56bfde18af09d3e72352d775e9435 -> trunk/850e1382a9c56bfde18af09d3e72352d775e9435 2025-09-07T07:34:58.5003535Z * [new tag] trunk/8678d831c48e616b717bff50f2d03141d2e9f965 -> trunk/8678d831c48e616b717bff50f2d03141d2e9f965 2025-09-07T07:34:58.5003881Z * [new tag] trunk/869cbcc16e489a4f5a14a93d5779b0ea86061c60 -> trunk/869cbcc16e489a4f5a14a93d5779b0ea86061c60 2025-09-07T07:34:58.5004236Z * [new tag] trunk/8703debf669bc2238211bfd039f4ecdd8228b7f7 -> trunk/8703debf669bc2238211bfd039f4ecdd8228b7f7 2025-09-07T07:34:58.5004585Z * [new tag] trunk/874069fbe46e82da5cfa405e6c0deb12e89ff608 -> trunk/874069fbe46e82da5cfa405e6c0deb12e89ff608 2025-09-07T07:34:58.5005333Z * [new tag] trunk/8875d6e394da2fffd04f31b28bf258c94d4776a3 -> trunk/8875d6e394da2fffd04f31b28bf258c94d4776a3 2025-09-07T07:34:58.5005619Z * [new tag] trunk/88d94d17e8c5155451393afa6eb3bab48ab61c16 -> trunk/88d94d17e8c5155451393afa6eb3bab48ab61c16 2025-09-07T07:34:58.5005905Z * [new tag] trunk/890626632def7e0ef95a2d01e87a0e4627824a9f -> trunk/890626632def7e0ef95a2d01e87a0e4627824a9f 2025-09-07T07:34:58.5006150Z * [new tag] trunk/8975cda2520b7b1b5bc3b4d8213edf261fa82570 -> trunk/8975cda2520b7b1b5bc3b4d8213edf261fa82570 
2025-09-07T07:34:58.5006397Z * [new tag] trunk/89d41d3f61d04f14730ec26f008a59bef6624610 -> trunk/89d41d3f61d04f14730ec26f008a59bef6624610 2025-09-07T07:34:58.5006637Z * [new tag] trunk/8bb213b6d599ef1273fe52f9b1f6d476056c3a41 -> trunk/8bb213b6d599ef1273fe52f9b1f6d476056c3a41 2025-09-07T07:34:58.5007250Z * [new tag] trunk/8e23a1227b5fb2e39afaa7d57c075a75b640a5af -> trunk/8e23a1227b5fb2e39afaa7d57c075a75b640a5af 2025-09-07T07:34:58.5007509Z * [new tag] trunk/8ec551bb354ab2b85fbbba9d461740a20366d248 -> trunk/8ec551bb354ab2b85fbbba9d461740a20366d248 2025-09-07T07:34:58.5007760Z * [new tag] trunk/8fd3c9ce919c8d5c645fd348bba517e948cbc29d -> trunk/8fd3c9ce919c8d5c645fd348bba517e948cbc29d 2025-09-07T07:34:58.5008006Z * [new tag] trunk/90f50f7e68e120d9574e6e3189e37b4280010ad9 -> trunk/90f50f7e68e120d9574e6e3189e37b4280010ad9 2025-09-07T07:34:58.5008253Z * [new tag] trunk/91f0bcf43fc0bc743350d491ac63b77e92054ac9 -> trunk/91f0bcf43fc0bc743350d491ac63b77e92054ac9 2025-09-07T07:34:58.5008500Z * [new tag] trunk/92576a594b8121f6b0b1b5a3ea16d08792fc68ab -> trunk/92576a594b8121f6b0b1b5a3ea16d08792fc68ab 2025-09-07T07:34:58.5008754Z * [new tag] trunk/92a43025e0baa1f2ce345f28d22913b518a1ab9d -> trunk/92a43025e0baa1f2ce345f28d22913b518a1ab9d 2025-09-07T07:34:58.5008989Z * [new tag] trunk/93fb23d6fae7c4e82c4239a1033e522088742634 -> trunk/93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:34:58.5009237Z * [new tag] trunk/9458d1ac3bd70c2af316a8ba95d2c6c9c1199c9c -> trunk/9458d1ac3bd70c2af316a8ba95d2c6c9c1199c9c 2025-09-07T07:34:58.5009511Z * [new tag] trunk/9480cdc0b61488c89a23c2f64f43b2dcedc8728e -> trunk/9480cdc0b61488c89a23c2f64f43b2dcedc8728e 2025-09-07T07:34:58.5009754Z * [new tag] trunk/9491d289b329e4ba4a9f5f5b1be7960671bb7840 -> trunk/9491d289b329e4ba4a9f5f5b1be7960671bb7840 2025-09-07T07:34:58.5009988Z * [new tag] trunk/9499c8761cd2067feb9877414e818f6fd00290f1 -> trunk/9499c8761cd2067feb9877414e818f6fd00290f1 2025-09-07T07:34:58.5010232Z * [new tag] trunk/95ee0bfea99d3d346d6502b91b497d2b35795504 -> trunk/95ee0bfea99d3d346d6502b91b497d2b35795504 2025-09-07T07:34:58.5010473Z * [new tag] trunk/98374612fc2febd686be20761e56bdc2424bc36a -> trunk/98374612fc2febd686be20761e56bdc2424bc36a 2025-09-07T07:34:58.5010720Z * [new tag] trunk/98efc9e93d8fc61eb53cb91378443617cb550500 -> trunk/98efc9e93d8fc61eb53cb91378443617cb550500 2025-09-07T07:34:58.5010969Z * [new tag] trunk/994f2a5dbcbdc915da39bf6f6ce4d1f5e74835c9 -> trunk/994f2a5dbcbdc915da39bf6f6ce4d1f5e74835c9 2025-09-07T07:34:58.5011200Z * [new tag] trunk/99f356fa58c8d726cef022d8710f5491291158f6 -> trunk/99f356fa58c8d726cef022d8710f5491291158f6 2025-09-07T07:34:58.5011444Z * [new tag] trunk/9a1c5c0a078b94d13ac5c1ae0d754d19fb73bf99 -> trunk/9a1c5c0a078b94d13ac5c1ae0d754d19fb73bf99 2025-09-07T07:34:58.5011677Z * [new tag] trunk/9a665ca3c472384e9d722bddba79e5a7680f1abd -> trunk/9a665ca3c472384e9d722bddba79e5a7680f1abd 2025-09-07T07:34:58.5011921Z * [new tag] trunk/9aedb3cd87b52160872173c177f61053d97bed57 -> trunk/9aedb3cd87b52160872173c177f61053d97bed57 2025-09-07T07:34:58.5012157Z * [new tag] trunk/9b81fe281da41f2421506339d26b027a468902f4 -> trunk/9b81fe281da41f2421506339d26b027a468902f4 2025-09-07T07:34:58.5012413Z * [new tag] trunk/9bdcee01f86e2969cff1140cdecfca13cb51816e -> trunk/9bdcee01f86e2969cff1140cdecfca13cb51816e 2025-09-07T07:34:58.5012657Z * [new tag] trunk/9c03d6be87eedc06e524e202e07a7e776551a839 -> trunk/9c03d6be87eedc06e524e202e07a7e776551a839 2025-09-07T07:34:58.5012892Z * [new tag] trunk/9c957723a0fedd9c637e63e023a613019e2cab60 -> 
trunk/9c957723a0fedd9c637e63e023a613019e2cab60 2025-09-07T07:34:58.5013129Z * [new tag] trunk/9e5247f51d81735e5f1e65e80588985fa93bccc5 -> trunk/9e5247f51d81735e5f1e65e80588985fa93bccc5 2025-09-07T07:34:58.5013370Z * [new tag] trunk/9eadb37cdd699f7e8e8177a5227bfeb16184ef26 -> trunk/9eadb37cdd699f7e8e8177a5227bfeb16184ef26 2025-09-07T07:34:58.5013675Z * [new tag] trunk/a00cdc1e4159db73c9ffb3f25e93e55877709a29 -> trunk/a00cdc1e4159db73c9ffb3f25e93e55877709a29 2025-09-07T07:34:58.5013909Z * [new tag] trunk/a02ee4a816d11380c6f564c1aba64d56af5ba705 -> trunk/a02ee4a816d11380c6f564c1aba64d56af5ba705 2025-09-07T07:34:58.5014145Z * [new tag] trunk/a3c7f77e50f900721817934120d60c2361b3c40d -> trunk/a3c7f77e50f900721817934120d60c2361b3c40d 2025-09-07T07:34:58.5014381Z * [new tag] trunk/a3d72b09ae12126a2b7d4a63a45ac100a882a802 -> trunk/a3d72b09ae12126a2b7d4a63a45ac100a882a802 2025-09-07T07:34:58.5014616Z * [new tag] trunk/a3e5466002791da609fcb069155d8ee347baee92 -> trunk/a3e5466002791da609fcb069155d8ee347baee92 2025-09-07T07:34:58.5014849Z * [new tag] trunk/a714437093ed196eee28f7de454cf4c41badc098 -> trunk/a714437093ed196eee28f7de454cf4c41badc098 2025-09-07T07:34:58.5015080Z * [new tag] trunk/a75e8cd27098f290de0b7439685d05ce02e91356 -> trunk/a75e8cd27098f290de0b7439685d05ce02e91356 2025-09-07T07:34:58.5015319Z * [new tag] trunk/a8d6943d36c1c2a5f90d3573460695bad4b623ae -> trunk/a8d6943d36c1c2a5f90d3573460695bad4b623ae 2025-09-07T07:34:58.5015560Z * [new tag] trunk/a918bbad6ab20649ff82eefb48417ecbe96bcb34 -> trunk/a918bbad6ab20649ff82eefb48417ecbe96bcb34 2025-09-07T07:34:58.5015847Z * [new tag] trunk/a99d8d39bc842d6ebc3e368b178e4884d24b056e -> trunk/a99d8d39bc842d6ebc3e368b178e4884d24b056e 2025-09-07T07:34:58.5016086Z * [new tag] trunk/aac1a50a191b4102d566c9c1ea22f06d6c2e3f02 -> trunk/aac1a50a191b4102d566c9c1ea22f06d6c2e3f02 2025-09-07T07:34:58.5016329Z * [new tag] trunk/aad96a202244c7d0d120c04ba8db593edd8c0f92 -> trunk/aad96a202244c7d0d120c04ba8db593edd8c0f92 2025-09-07T07:34:58.5016576Z * [new tag] trunk/ab643e4dbbaf7b663d4237514cbf01af9b11565c -> trunk/ab643e4dbbaf7b663d4237514cbf01af9b11565c 2025-09-07T07:34:58.5016830Z * [new tag] trunk/abc447174cd2cf8591edbc70a9f836f9a5779f47 -> trunk/abc447174cd2cf8591edbc70a9f836f9a5779f47 2025-09-07T07:34:58.5017076Z * [new tag] trunk/acece97c3a9dceb63194e314da93fdf37cf15a0d -> trunk/acece97c3a9dceb63194e314da93fdf37cf15a0d 2025-09-07T07:34:58.5017322Z * [new tag] trunk/adae7f66aacf3f248c3101b858cf98d5809119fa -> trunk/adae7f66aacf3f248c3101b858cf98d5809119fa 2025-09-07T07:34:58.5017553Z * [new tag] trunk/ae0edc133e61e3b16caf0b2ee0ff3f33ab72af4c -> trunk/ae0edc133e61e3b16caf0b2ee0ff3f33ab72af4c 2025-09-07T07:34:58.5017777Z * [new tag] trunk/aed33a8fcbd60b052d4559d261390c5797129c6d -> trunk/aed33a8fcbd60b052d4559d261390c5797129c6d 2025-09-07T07:34:58.5017994Z * [new tag] trunk/b04e922712080a3652e438d05e8bb74e0cd2d238 -> trunk/b04e922712080a3652e438d05e8bb74e0cd2d238 2025-09-07T07:34:58.5018268Z * [new tag] trunk/b0a3e58dd71c1a039ac0ef51e5bd8f704f632f6f -> trunk/b0a3e58dd71c1a039ac0ef51e5bd8f704f632f6f 2025-09-07T07:34:58.5024620Z * [new tag] trunk/b16d3f4c8c01d461c2f01064e9ca5fa2b33f5cf1 -> trunk/b16d3f4c8c01d461c2f01064e9ca5fa2b33f5cf1 2025-09-07T07:34:58.5026432Z * [new tag] trunk/b18bb6796f210a183e687d9d64984a5a9d13cf09 -> trunk/b18bb6796f210a183e687d9d64984a5a9d13cf09 2025-09-07T07:34:58.5026737Z * [new tag] trunk/b1bb98ddebdd3e41bf7987372409bdce96ae55de -> trunk/b1bb98ddebdd3e41bf7987372409bdce96ae55de 2025-09-07T07:34:58.5027098Z * [new tag] 
trunk/b2b4add0e754411372060e1d7b4057a66439172b -> trunk/b2b4add0e754411372060e1d7b4057a66439172b 2025-09-07T07:34:58.5027446Z * [new tag] trunk/b2c7b9ad2dc5a7c0b61febd307761bd5bc2f0f05 -> trunk/b2c7b9ad2dc5a7c0b61febd307761bd5bc2f0f05 2025-09-07T07:34:58.5027805Z * [new tag] trunk/b40d9432be44a6b5974ee62e7d19c3c61c5ece37 -> trunk/b40d9432be44a6b5974ee62e7d19c3c61c5ece37 2025-09-07T07:34:58.5028135Z * [new tag] trunk/b4ad38279b178b7bd14355123c1101e2e853e77b -> trunk/b4ad38279b178b7bd14355123c1101e2e853e77b 2025-09-07T07:34:58.5028643Z * [new tag] trunk/b67c41039835bd9b20b83cd6233e86baaa5f5dde -> trunk/b67c41039835bd9b20b83cd6233e86baaa5f5dde 2025-09-07T07:34:58.5029036Z * [new tag] trunk/b6d0a9ea9056ede4f7024dbf3bd6c43be3aff49c -> trunk/b6d0a9ea9056ede4f7024dbf3bd6c43be3aff49c 2025-09-07T07:34:58.5029275Z * [new tag] trunk/b7dad7dd49448c88d0751fa2e29c70afe985f734 -> trunk/b7dad7dd49448c88d0751fa2e29c70afe985f734 2025-09-07T07:34:58.5029496Z * [new tag] trunk/b7e207ca9f046ddd716076965a0cce403ba99052 -> trunk/b7e207ca9f046ddd716076965a0cce403ba99052 2025-09-07T07:34:58.5029728Z * [new tag] trunk/b919560c4a7010e2d89facee25586269a994746e -> trunk/b919560c4a7010e2d89facee25586269a994746e 2025-09-07T07:34:58.5029959Z * [new tag] trunk/b9ba612f7a968f7b27e121ca8f4d0a4d954f5354 -> trunk/b9ba612f7a968f7b27e121ca8f4d0a4d954f5354 2025-09-07T07:34:58.5030198Z * [new tag] trunk/ba7f546ccccb5e0b36d9070dc25f26a9647f89f8 -> trunk/ba7f546ccccb5e0b36d9070dc25f26a9647f89f8 2025-09-07T07:34:58.5030436Z * [new tag] trunk/bb950284c7e72905994bc25dd436c10e48088d85 -> trunk/bb950284c7e72905994bc25dd436c10e48088d85 2025-09-07T07:34:58.5030671Z * [new tag] trunk/bbedc71fd3267c639c38b4ec25eaa22f973d9c4d -> trunk/bbedc71fd3267c639c38b4ec25eaa22f973d9c4d 2025-09-07T07:34:58.5030958Z * [new tag] trunk/bc4db2c27fce6ff1648bdc5af31ec225d2a31f37 -> trunk/bc4db2c27fce6ff1648bdc5af31ec225d2a31f37 2025-09-07T07:34:58.5031584Z * [new tag] trunk/bc505977fb66677a09c31155c987330fbb18a865 -> trunk/bc505977fb66677a09c31155c987330fbb18a865 2025-09-07T07:34:58.5031843Z * [new tag] trunk/bd39e47feea7326afb5bbb67fcb1e69279239527 -> trunk/bd39e47feea7326afb5bbb67fcb1e69279239527 2025-09-07T07:34:58.5032071Z * [new tag] trunk/be5b03dde96638f25ffd732a4fed7e41b4cf40e1 -> trunk/be5b03dde96638f25ffd732a4fed7e41b4cf40e1 2025-09-07T07:34:58.5032314Z * [new tag] trunk/bffc7dd1f374d8408911cd22c6b3d6df39ded9b3 -> trunk/bffc7dd1f374d8408911cd22c6b3d6df39ded9b3 2025-09-07T07:34:58.5032672Z * [new tag] trunk/c024b1f5a18d5c5aee5cc2acdd4c52b24b93ffcf -> trunk/c024b1f5a18d5c5aee5cc2acdd4c52b24b93ffcf 2025-09-07T07:34:58.5032893Z * [new tag] trunk/c0983e6cc0acf71689e1851d12609e00b3f59371 -> trunk/c0983e6cc0acf71689e1851d12609e00b3f59371 2025-09-07T07:34:58.5033253Z * [new tag] trunk/c10195e723eeeedd099ed8b73eda7184ca618fad -> trunk/c10195e723eeeedd099ed8b73eda7184ca618fad 2025-09-07T07:34:58.5038022Z * [new tag] trunk/c157cf6488ade6a7ee2ce2d25b059e1335630a99 -> trunk/c157cf6488ade6a7ee2ce2d25b059e1335630a99 2025-09-07T07:34:58.5038403Z * [new tag] trunk/c2a30246172fd71d56529907ffd3c27b76b1f3a7 -> trunk/c2a30246172fd71d56529907ffd3c27b76b1f3a7 2025-09-07T07:34:58.5038749Z * [new tag] trunk/c32111149921b48bfef909293f1049e21619ed76 -> trunk/c32111149921b48bfef909293f1049e21619ed76 2025-09-07T07:34:58.5039057Z * [new tag] trunk/c37103234afc832dcad307e9016230810957c9d5 -> trunk/c37103234afc832dcad307e9016230810957c9d5 2025-09-07T07:34:58.5039386Z * [new tag] trunk/c3ceca2995cd35e1376c4b0704669bff1a81e836 -> trunk/c3ceca2995cd35e1376c4b0704669bff1a81e836 
2025-09-07T07:34:58.5039733Z * [new tag] trunk/c3d54dea9febb1236d48d19e5d4876a63f2e20fd -> trunk/c3d54dea9febb1236d48d19e5d4876a63f2e20fd 2025-09-07T07:34:58.5040052Z * [new tag] trunk/c465b3d52c5687fe910d35a5c75341b77f821741 -> trunk/c465b3d52c5687fe910d35a5c75341b77f821741 2025-09-07T07:34:58.5040396Z * [new tag] trunk/c5b8a10be5e89396da916d1069ffcb7135f0372b -> trunk/c5b8a10be5e89396da916d1069ffcb7135f0372b 2025-09-07T07:34:58.5040733Z * [new tag] trunk/c7e41071a08f4045bc11ab60ec366d7357d56e30 -> trunk/c7e41071a08f4045bc11ab60ec366d7357d56e30 2025-09-07T07:34:58.5041273Z * [new tag] trunk/c98ddaca6d2e19ca37aff00c4ff0cda1e9a6ff65 -> trunk/c98ddaca6d2e19ca37aff00c4ff0cda1e9a6ff65 2025-09-07T07:34:58.5041633Z * [new tag] trunk/cb1e31362c7b53acf4ac95b9f8878064c184f03b -> trunk/cb1e31362c7b53acf4ac95b9f8878064c184f03b 2025-09-07T07:34:58.5041881Z * [new tag] trunk/cbfb005f7cce79974795b148e265f594f59477c8 -> trunk/cbfb005f7cce79974795b148e265f594f59477c8 2025-09-07T07:34:58.5042125Z * [new tag] trunk/cc5bdd12401bda835291d2f3cb297132ebdbf358 -> trunk/cc5bdd12401bda835291d2f3cb297132ebdbf358 2025-09-07T07:34:58.5042359Z * [new tag] trunk/cd529b686d54bbaa443f5b310140de48422d96c7 -> trunk/cd529b686d54bbaa443f5b310140de48422d96c7 2025-09-07T07:34:58.5042583Z * [new tag] trunk/cec0ff122815582af5302360aff03676558c5c87 -> trunk/cec0ff122815582af5302360aff03676558c5c87 2025-09-07T07:34:58.5042827Z * [new tag] trunk/d11720efdb563d02cf4f7d324311fb15a755268e -> trunk/d11720efdb563d02cf4f7d324311fb15a755268e 2025-09-07T07:34:58.5043063Z * [new tag] trunk/d1706d9128ae24d9048167e80d3fe5196d19035e -> trunk/d1706d9128ae24d9048167e80d3fe5196d19035e 2025-09-07T07:34:58.5043457Z * [new tag] trunk/d1a15abfdcaef138f2d9e93a9f46be44f30b766d -> trunk/d1a15abfdcaef138f2d9e93a9f46be44f30b766d 2025-09-07T07:34:58.5044476Z * [new tag] trunk/d232a95d4a79404ca05c1f52d37fde7339dcdf49 -> trunk/d232a95d4a79404ca05c1f52d37fde7339dcdf49 2025-09-07T07:34:58.5044780Z * [new tag] trunk/d2d4c8e9b2371c9aacfb771d9402ac7427b9778e -> trunk/d2d4c8e9b2371c9aacfb771d9402ac7427b9778e 2025-09-07T07:34:58.5045193Z * [new tag] trunk/d33840c542b387ab08ba49aa6c45aa9567fd9be7 -> trunk/d33840c542b387ab08ba49aa6c45aa9567fd9be7 2025-09-07T07:34:58.5045483Z * [new tag] trunk/d5643e8f3a648a99636bfa1f2a41d54bd3c0d0f1 -> trunk/d5643e8f3a648a99636bfa1f2a41d54bd3c0d0f1 2025-09-07T07:34:58.5045747Z * [new tag] trunk/d5b38410b5b6cf75c7a7389972777a6497926ee7 -> trunk/d5b38410b5b6cf75c7a7389972777a6497926ee7 2025-09-07T07:34:58.5046004Z * [new tag] trunk/d5e0f4202ba14632e4d14862ace096609e763462 -> trunk/d5e0f4202ba14632e4d14862ace096609e763462 2025-09-07T07:34:58.5046256Z * [new tag] trunk/d636c181f9140a7b59be10b36eae23039fc2bb72 -> trunk/d636c181f9140a7b59be10b36eae23039fc2bb72 2025-09-07T07:34:58.5046499Z * [new tag] trunk/d64718503728001a1e78168fd7f2d4ff23e57285 -> trunk/d64718503728001a1e78168fd7f2d4ff23e57285 2025-09-07T07:34:58.5048170Z * [new tag] trunk/d67c29ad22670320d676b02e394274af34e8e643 -> trunk/d67c29ad22670320d676b02e394274af34e8e643 2025-09-07T07:34:58.5048504Z * [new tag] trunk/d6b74568e2c98ce58ecc145b72ac66d4caf7ce95 -> trunk/d6b74568e2c98ce58ecc145b72ac66d4caf7ce95 2025-09-07T07:34:58.5059154Z * [new tag] trunk/d711f27845abd45007ccab6076649ebd896c2661 -> trunk/d711f27845abd45007ccab6076649ebd896c2661 2025-09-07T07:34:58.5061299Z * [new tag] trunk/d9d6dde0f42d4bcc8c97671ac50d5096c7e500ab -> trunk/d9d6dde0f42d4bcc8c97671ac50d5096c7e500ab 2025-09-07T07:34:58.5061596Z * [new tag] trunk/da4db4b33d1fdd046650cf19fdbac581a19bf2f9 -> 
trunk/da4db4b33d1fdd046650cf19fdbac581a19bf2f9 2025-09-07T07:34:58.5061860Z * [new tag] trunk/dac8a4b91c01c3bbc96f54e621b1ea4ffdbd29d1 -> trunk/dac8a4b91c01c3bbc96f54e621b1ea4ffdbd29d1 2025-09-07T07:34:58.5062109Z * [new tag] trunk/dbec08729fb9848bebed6048c63831b87170d061 -> trunk/dbec08729fb9848bebed6048c63831b87170d061 2025-09-07T07:34:58.5062336Z * [new tag] trunk/dcf385395d838f38c8dca25913578230dd43099a -> trunk/dcf385395d838f38c8dca25913578230dd43099a 2025-09-07T07:34:58.5062567Z * [new tag] trunk/dd2519abe83ec3c40d4797492434e41fe3b47e17 -> trunk/dd2519abe83ec3c40d4797492434e41fe3b47e17 2025-09-07T07:34:58.5063039Z * [new tag] trunk/dec72ea4b006dd0fbcaaaa106ad273d73807ab9d -> trunk/dec72ea4b006dd0fbcaaaa106ad273d73807ab9d 2025-09-07T07:34:58.5063262Z * [new tag] trunk/e0a62b266c021b910ce6dc02a6c9429210487717 -> trunk/e0a62b266c021b910ce6dc02a6c9429210487717 2025-09-07T07:34:58.5063491Z * [new tag] trunk/e19e02c84c9dcc408375e5cae3b0709c18b99228 -> trunk/e19e02c84c9dcc408375e5cae3b0709c18b99228 2025-09-07T07:34:58.5063722Z * [new tag] trunk/e304ea4e69d3a7deeb7e48c7450c214a4c953937 -> trunk/e304ea4e69d3a7deeb7e48c7450c214a4c953937 2025-09-07T07:34:58.5063953Z * [new tag] trunk/e3068cdb446adefb5a875616ba37a60235391439 -> trunk/e3068cdb446adefb5a875616ba37a60235391439 2025-09-07T07:34:58.5064179Z * [new tag] trunk/e381d4b0205d5f126c1de534f867ba776f7c3ee6 -> trunk/e381d4b0205d5f126c1de534f867ba776f7c3ee6 2025-09-07T07:34:58.5064419Z * [new tag] trunk/e4bd0ff4f8981b805df32ea5b3550621965ea4f2 -> trunk/e4bd0ff4f8981b805df32ea5b3550621965ea4f2 2025-09-07T07:34:58.5064653Z * [new tag] trunk/e532c9d4f1cdcbc1ea9628f55b9813e77847bdc7 -> trunk/e532c9d4f1cdcbc1ea9628f55b9813e77847bdc7 2025-09-07T07:34:58.5064872Z * [new tag] trunk/e92cd9415377403b6e90585e764639e2e0b5973b -> trunk/e92cd9415377403b6e90585e764639e2e0b5973b 2025-09-07T07:34:58.5065191Z * [new tag] trunk/e9481b6617b5576b099d8ca5798111592e9ad090 -> trunk/e9481b6617b5576b099d8ca5798111592e9ad090 2025-09-07T07:34:58.5065466Z * [new tag] trunk/ea1883dfd3e42defe37b11202b878bb76defa087 -> trunk/ea1883dfd3e42defe37b11202b878bb76defa087 2025-09-07T07:34:58.5065714Z * [new tag] trunk/eac3d6f04cfbbebe3d470dacd216da7d4b1f95a8 -> trunk/eac3d6f04cfbbebe3d470dacd216da7d4b1f95a8 2025-09-07T07:34:58.5065937Z * [new tag] trunk/eb18d32bda75189494d955aa001ade15f10333de -> trunk/eb18d32bda75189494d955aa001ade15f10333de 2025-09-07T07:34:58.5066174Z * [new tag] trunk/ef3be6726f7ff4b77c22db10cec5b686f9107ea9 -> trunk/ef3be6726f7ff4b77c22db10cec5b686f9107ea9 2025-09-07T07:34:58.5066420Z * [new tag] trunk/ef8aabd42422725026cb4dbf48aafa9efa226a04 -> trunk/ef8aabd42422725026cb4dbf48aafa9efa226a04 2025-09-07T07:34:58.5066651Z * [new tag] trunk/f00445b43eee57e20bb9316fa796ca23bf73373b -> trunk/f00445b43eee57e20bb9316fa796ca23bf73373b 2025-09-07T07:34:58.5066926Z * [new tag] trunk/f0c391102b754e3b145e8c59231d2df563487e37 -> trunk/f0c391102b754e3b145e8c59231d2df563487e37 2025-09-07T07:34:58.5067152Z * [new tag] trunk/f27985b7e796fb66a1b476284ba42d8cb360a751 -> trunk/f27985b7e796fb66a1b476284ba42d8cb360a751 2025-09-07T07:34:58.5067380Z * [new tag] trunk/f36f285953700f971552083a5da9d0ceacb63bbd -> trunk/f36f285953700f971552083a5da9d0ceacb63bbd 2025-09-07T07:34:58.5067609Z * [new tag] trunk/f3cebec39ebc110e1c8b06e741896585f7892dbb -> trunk/f3cebec39ebc110e1c8b06e741896585f7892dbb 2025-09-07T07:34:58.5067847Z * [new tag] trunk/f4c33cd44acac92c0b451a04da20ebe9370e5b0c -> trunk/f4c33cd44acac92c0b451a04da20ebe9370e5b0c 2025-09-07T07:34:58.5068066Z * [new tag] 
trunk/f612045ce105f008b2b675e2fc870163babeb2e8 -> trunk/f612045ce105f008b2b675e2fc870163babeb2e8 2025-09-07T07:34:58.5068298Z * [new tag] trunk/f8746b878dfc1e9639d42cbde832e9b9e792c86c -> trunk/f8746b878dfc1e9639d42cbde832e9b9e792c86c 2025-09-07T07:34:58.5072658Z * [new tag] trunk/f8ffa9194e26523e5f976d4a824d5cc58922727c -> trunk/f8ffa9194e26523e5f976d4a824d5cc58922727c 2025-09-07T07:34:58.5073096Z * [new tag] trunk/f981a7fa5230b98974291fdde32fe8488bc5d469 -> trunk/f981a7fa5230b98974291fdde32fe8488bc5d469 2025-09-07T07:34:58.5073503Z * [new tag] trunk/fbf3d2027daabbcb44d0af274b139be2a248a4f7 -> trunk/fbf3d2027daabbcb44d0af274b139be2a248a4f7 2025-09-07T07:34:58.5073879Z * [new tag] trunk/fca2601c9d628e1bd2d75c7318cd22c4e8c832aa -> trunk/fca2601c9d628e1bd2d75c7318cd22c4e8c832aa 2025-09-07T07:34:58.5074313Z * [new tag] trunk/fea20775ad96bdca972a1811d7d3372f368614ab -> trunk/fea20775ad96bdca972a1811d7d3372f368614ab 2025-09-07T07:34:58.5074596Z * [new tag] trunk/fefee081642f87419a21dc852f7167d4640443cd -> trunk/fefee081642f87419a21dc852f7167d4640443cd 2025-09-07T07:34:58.5074712Z * [new tag] v0.1.1 -> v0.1.1 2025-09-07T07:34:58.5074823Z * [new tag] v0.1.10 -> v0.1.10 2025-09-07T07:34:58.5074935Z * [new tag] v0.1.11 -> v0.1.11 2025-09-07T07:34:58.5075033Z * [new tag] v0.1.12 -> v0.1.12 2025-09-07T07:34:58.5075146Z * [new tag] v0.1.2 -> v0.1.2 2025-09-07T07:34:58.5075248Z * [new tag] v0.1.3 -> v0.1.3 2025-09-07T07:34:58.5075345Z * [new tag] v0.1.4 -> v0.1.4 2025-09-07T07:34:58.5075452Z * [new tag] v0.1.5 -> v0.1.5 2025-09-07T07:34:58.5075550Z * [new tag] v0.1.6 -> v0.1.6 2025-09-07T07:34:58.5075652Z * [new tag] v0.1.7 -> v0.1.7 2025-09-07T07:34:58.5075749Z * [new tag] v0.1.8 -> v0.1.8 2025-09-07T07:34:58.5075887Z * [new tag] v0.1.9 -> v0.1.9 2025-09-07T07:34:58.5075991Z * [new tag] v0.2.0 -> v0.2.0 2025-09-07T07:34:58.5076084Z * [new tag] v0.3.0 -> v0.3.0 2025-09-07T07:34:58.5076184Z * [new tag] v0.3.1 -> v0.3.1 2025-09-07T07:34:58.5076286Z * [new tag] v0.4.0 -> v0.4.0 2025-09-07T07:34:58.5076383Z * [new tag] v0.4.1 -> v0.4.1 2025-09-07T07:34:58.5076480Z * [new tag] v1.0.0 -> v1.0.0 2025-09-07T07:34:58.5076590Z * [new tag] v1.0.0a0 -> v1.0.0a0 2025-09-07T07:34:58.5076688Z * [new tag] v1.0.1 -> v1.0.1 2025-09-07T07:34:58.5076785Z * [new tag] v1.0rc0 -> v1.0rc0 2025-09-07T07:34:58.5076933Z * [new tag] v1.0rc1 -> v1.0rc1 2025-09-07T07:34:58.5077352Z * [new tag] v1.1.0 -> v1.1.0 2025-09-07T07:34:58.5081331Z * [new tag] v1.1.0a0 -> v1.1.0a0 2025-09-07T07:34:58.5081647Z * [new tag] v1.10.0 -> v1.10.0 2025-09-07T07:34:58.5081802Z * [new tag] v1.10.0-rc1 -> v1.10.0-rc1 2025-09-07T07:34:58.5081927Z * [new tag] v1.10.0-rc2 -> v1.10.0-rc2 2025-09-07T07:34:58.5082168Z * [new tag] v1.10.0-rc3 -> v1.10.0-rc3 2025-09-07T07:34:58.5082314Z * [new tag] v1.10.1 -> v1.10.1 2025-09-07T07:34:58.5082519Z * [new tag] v1.10.1-rc1 -> v1.10.1-rc1 2025-09-07T07:34:58.5083064Z * [new tag] v1.10.2 -> v1.10.2 2025-09-07T07:34:58.5083211Z * [new tag] v1.10.2-rc1 -> v1.10.2-rc1 2025-09-07T07:34:58.5083338Z * [new tag] v1.11.0 -> v1.11.0 2025-09-07T07:34:58.5083597Z * [new tag] v1.11.0-rc1 -> v1.11.0-rc1 2025-09-07T07:34:58.5084116Z * [new tag] v1.11.0-rc2 -> v1.11.0-rc2 2025-09-07T07:34:58.5084948Z * [new tag] v1.11.0-rc3 -> v1.11.0-rc3 2025-09-07T07:34:58.5085640Z * [new tag] v1.11.0-rc4 -> v1.11.0-rc4 2025-09-07T07:34:58.5086310Z * [new tag] v1.11.0-rc5 -> v1.11.0-rc5 2025-09-07T07:34:58.5086591Z * [new tag] v1.11.0-rc6 -> v1.11.0-rc6 2025-09-07T07:34:58.5087193Z * [new tag] v1.11.0-rc7 -> v1.11.0-rc7 
2025-09-07T07:34:58.5087593Z * [new tag] v1.12.0 -> v1.12.0 2025-09-07T07:34:58.5091085Z * [new tag] v1.12.0-rc1 -> v1.12.0-rc1 2025-09-07T07:34:58.5091230Z * [new tag] v1.12.0-rc2 -> v1.12.0-rc2 2025-09-07T07:34:58.5091375Z * [new tag] v1.12.0-rc3 -> v1.12.0-rc3 2025-09-07T07:34:58.5091482Z * [new tag] v1.12.0-rc4 -> v1.12.0-rc4 2025-09-07T07:34:58.5091585Z * [new tag] v1.12.0-rc5 -> v1.12.0-rc5 2025-09-07T07:34:58.5091700Z * [new tag] v1.12.0-rc6 -> v1.12.0-rc6 2025-09-07T07:34:58.5091840Z * [new tag] v1.12.0-rc7 -> v1.12.0-rc7 2025-09-07T07:34:58.5092478Z * [new tag] v1.12.0-rc8 -> v1.12.0-rc8 2025-09-07T07:34:58.5092770Z * [new tag] v1.12.1 -> v1.12.1 2025-09-07T07:34:58.5097392Z * [new tag] v1.12.1-rc1 -> v1.12.1-rc1 2025-09-07T07:34:58.5097533Z * [new tag] v1.12.1-rc2 -> v1.12.1-rc2 2025-09-07T07:34:58.5097642Z * [new tag] v1.12.1-rc3 -> v1.12.1-rc3 2025-09-07T07:34:58.5097897Z * [new tag] v1.12.1-rc4 -> v1.12.1-rc4 2025-09-07T07:34:58.5098001Z * [new tag] v1.12.1-rc5 -> v1.12.1-rc5 2025-09-07T07:34:58.5098115Z * [new tag] v1.13.0 -> v1.13.0 2025-09-07T07:34:58.5102801Z * [new tag] v1.13.0-rc1 -> v1.13.0-rc1 2025-09-07T07:34:58.5102952Z * [new tag] v1.13.0-rc2 -> v1.13.0-rc2 2025-09-07T07:34:58.5103054Z * [new tag] v1.13.0-rc3 -> v1.13.0-rc3 2025-09-07T07:34:58.5103170Z * [new tag] v1.13.0-rc4 -> v1.13.0-rc4 2025-09-07T07:34:58.5103283Z * [new tag] v1.13.0-rc5 -> v1.13.0-rc5 2025-09-07T07:34:58.5103384Z * [new tag] v1.13.0-rc6 -> v1.13.0-rc6 2025-09-07T07:34:58.5108948Z * [new tag] v1.13.1 -> v1.13.1 2025-09-07T07:34:58.5109122Z * [new tag] v1.13.1-rc1 -> v1.13.1-rc1 2025-09-07T07:34:58.5109249Z * [new tag] v1.2.0 -> v1.2.0 2025-09-07T07:34:58.5109365Z * [new tag] v1.2.0a0 -> v1.2.0a0 2025-09-07T07:34:58.5109473Z * [new tag] v1.3.0 -> v1.3.0 2025-09-07T07:34:58.5109591Z * [new tag] v1.3.0a0 -> v1.3.0a0 2025-09-07T07:34:58.5109695Z * [new tag] v1.3.1 -> v1.3.1 2025-09-07T07:34:58.5109810Z * [new tag] v1.4.0 -> v1.4.0 2025-09-07T07:34:58.5109916Z * [new tag] v1.4.0a0 -> v1.4.0a0 2025-09-07T07:34:58.5110024Z * [new tag] v1.4.1 -> v1.4.1 2025-09-07T07:34:58.5110121Z * [new tag] v1.5.0 -> v1.5.0 2025-09-07T07:34:58.5110235Z * [new tag] v1.5.0-rc1 -> v1.5.0-rc1 2025-09-07T07:34:58.5110350Z * [new tag] v1.5.0-rc2 -> v1.5.0-rc2 2025-09-07T07:34:58.5110455Z * [new tag] v1.5.0-rc3 -> v1.5.0-rc3 2025-09-07T07:34:58.5110573Z * [new tag] v1.5.0-rc4 -> v1.5.0-rc4 2025-09-07T07:34:58.5110674Z * [new tag] v1.5.0-rc5 -> v1.5.0-rc5 2025-09-07T07:34:58.5110773Z * [new tag] v1.5.1 -> v1.5.1 2025-09-07T07:34:58.5111048Z * [new tag] v1.5.1-rc1 -> v1.5.1-rc1 2025-09-07T07:34:58.5111142Z * [new tag] v1.6.0 -> v1.6.0 2025-09-07T07:34:58.5111247Z * [new tag] v1.6.0-rc1 -> v1.6.0-rc1 2025-09-07T07:34:58.5116009Z * [new tag] v1.6.0-rc2 -> v1.6.0-rc2 2025-09-07T07:34:58.5116153Z * [new tag] v1.6.0-rc3 -> v1.6.0-rc3 2025-09-07T07:34:58.5116276Z * [new tag] v1.6.0-rc4 -> v1.6.0-rc4 2025-09-07T07:34:58.5116381Z * [new tag] v1.6.0-rc5 -> v1.6.0-rc5 2025-09-07T07:34:58.5116494Z * [new tag] v1.6.0-rc6 -> v1.6.0-rc6 2025-09-07T07:34:58.5116592Z * [new tag] v1.6.0-rc7 -> v1.6.0-rc7 2025-09-07T07:34:58.5116706Z * [new tag] v1.7.0 -> v1.7.0 2025-09-07T07:34:58.5116806Z * [new tag] v1.7.0-rc1 -> v1.7.0-rc1 2025-09-07T07:34:58.5116917Z * [new tag] v1.7.0-rc2 -> v1.7.0-rc2 2025-09-07T07:34:58.5117017Z * [new tag] v1.7.0-rc3 -> v1.7.0-rc3 2025-09-07T07:34:58.5117116Z * [new tag] v1.7.0-rc4 -> v1.7.0-rc4 2025-09-07T07:34:58.5121297Z * [new tag] v1.7.1 -> v1.7.1 2025-09-07T07:34:58.5121733Z * [new tag] v1.7.1-rc1 -> v1.7.1-rc1 
2025-09-07T07:34:58.5121986Z * [new tag] v1.7.1-rc2 -> v1.7.1-rc2 2025-09-07T07:34:58.5122127Z * [new tag] v1.7.1-rc3 -> v1.7.1-rc3 2025-09-07T07:34:58.5122239Z * [new tag] v1.8.0 -> v1.8.0 2025-09-07T07:34:58.5122474Z * [new tag] v1.8.0-rc1 -> v1.8.0-rc1 2025-09-07T07:34:58.5122581Z * [new tag] v1.8.0-rc2 -> v1.8.0-rc2 2025-09-07T07:34:58.5122779Z * [new tag] v1.8.0-rc3 -> v1.8.0-rc3 2025-09-07T07:34:58.5122900Z * [new tag] v1.8.0-rc4 -> v1.8.0-rc4 2025-09-07T07:34:58.5123010Z * [new tag] v1.8.0-rc5 -> v1.8.0-rc5 2025-09-07T07:34:58.5123222Z * [new tag] v1.8.1 -> v1.8.1 2025-09-07T07:34:58.5123653Z * [new tag] v1.8.1-rc1 -> v1.8.1-rc1 2025-09-07T07:34:58.5123795Z * [new tag] v1.8.1-rc2 -> v1.8.1-rc2 2025-09-07T07:34:58.5123917Z * [new tag] v1.8.1-rc3 -> v1.8.1-rc3 2025-09-07T07:34:58.5124036Z * [new tag] v1.8.2 -> v1.8.2 2025-09-07T07:34:58.5124152Z * [new tag] v1.8.2-rc1 -> v1.8.2-rc1 2025-09-07T07:34:58.5124258Z * [new tag] v1.9.0 -> v1.9.0 2025-09-07T07:34:58.5124563Z * [new tag] v1.9.0-rc1 -> v1.9.0-rc1 2025-09-07T07:34:58.5124690Z * [new tag] v1.9.0-rc2 -> v1.9.0-rc2 2025-09-07T07:34:58.5126185Z * [new tag] v1.9.0-rc3 -> v1.9.0-rc3 2025-09-07T07:34:58.5126341Z * [new tag] v1.9.0-rc4 -> v1.9.0-rc4 2025-09-07T07:34:58.5126559Z * [new tag] v1.9.1 -> v1.9.1 2025-09-07T07:34:58.5127666Z * [new tag] v1.9.1-rc1 -> v1.9.1-rc1 2025-09-07T07:34:58.5127795Z * [new tag] v1.9.1-rc2 -> v1.9.1-rc2 2025-09-07T07:34:58.5132369Z * [new tag] v2.0.0 -> v2.0.0 2025-09-07T07:34:58.5132700Z * [new tag] v2.0.0-rc1 -> v2.0.0-rc1 2025-09-07T07:34:58.5132857Z * [new tag] v2.0.0-rc2 -> v2.0.0-rc2 2025-09-07T07:34:58.5132978Z * [new tag] v2.0.0-rc3 -> v2.0.0-rc3 2025-09-07T07:34:58.5133380Z * [new tag] v2.0.0-rc4 -> v2.0.0-rc4 2025-09-07T07:34:58.5133504Z * [new tag] v2.0.0-rc5 -> v2.0.0-rc5 2025-09-07T07:34:58.5133707Z * [new tag] v2.0.0-rc6 -> v2.0.0-rc6 2025-09-07T07:34:58.5134320Z * [new tag] v2.0.1 -> v2.0.1 2025-09-07T07:34:58.5134486Z * [new tag] v2.0.1-rc1 -> v2.0.1-rc1 2025-09-07T07:34:58.5137980Z * [new tag] v2.0.1-rc2 -> v2.0.1-rc2 2025-09-07T07:34:58.5138315Z * [new tag] v2.0.1-rc3 -> v2.0.1-rc3 2025-09-07T07:34:58.5138930Z * [new tag] v2.0.1-rc4 -> v2.0.1-rc4 2025-09-07T07:34:58.5139083Z * [new tag] v2.1.0 -> v2.1.0 2025-09-07T07:34:58.5139188Z * [new tag] v2.1.0-rc1 -> v2.1.0-rc1 2025-09-07T07:34:58.5139305Z * [new tag] v2.1.0-rc2 -> v2.1.0-rc2 2025-09-07T07:34:58.5139420Z * [new tag] v2.1.0-rc3 -> v2.1.0-rc3 2025-09-07T07:34:58.5139534Z * [new tag] v2.1.0-rc4 -> v2.1.0-rc4 2025-09-07T07:34:58.5139646Z * [new tag] v2.1.0-rc5 -> v2.1.0-rc5 2025-09-07T07:34:58.5139924Z * [new tag] v2.1.0-rc6 -> v2.1.0-rc6 2025-09-07T07:34:58.5142403Z * [new tag] v2.1.1 -> v2.1.1 2025-09-07T07:34:58.5142523Z * [new tag] v2.1.1-rc1 -> v2.1.1-rc1 2025-09-07T07:34:58.5142642Z * [new tag] v2.1.1-rc2 -> v2.1.1-rc2 2025-09-07T07:34:58.5142892Z * [new tag] v2.1.1-rc3 -> v2.1.1-rc3 2025-09-07T07:34:58.5143008Z * [new tag] v2.1.1-rc4 -> v2.1.1-rc4 2025-09-07T07:34:58.5143215Z * [new tag] v2.1.1-rc5 -> v2.1.1-rc5 2025-09-07T07:34:58.5143348Z * [new tag] v2.1.1-rc6 -> v2.1.1-rc6 2025-09-07T07:34:58.5143533Z * [new tag] v2.1.2 -> v2.1.2 2025-09-07T07:34:58.5143664Z * [new tag] v2.1.2-rc1 -> v2.1.2-rc1 2025-09-07T07:34:58.5147576Z * [new tag] v2.1.2-rc2 -> v2.1.2-rc2 2025-09-07T07:34:58.5147762Z * [new tag] v2.1.2-rc3 -> v2.1.2-rc3 2025-09-07T07:34:58.5147948Z * [new tag] v2.2.0 -> v2.2.0 2025-09-07T07:34:58.5148069Z * [new tag] v2.2.0-rc1 -> v2.2.0-rc1 2025-09-07T07:34:58.5148667Z * [new tag] v2.2.0-rc2 -> v2.2.0-rc2 
2025-09-07T07:34:58.5148814Z * [new tag] v2.2.0-rc3 -> v2.2.0-rc3 2025-09-07T07:34:58.5159158Z * [new tag] v2.2.0-rc4 -> v2.2.0-rc4 2025-09-07T07:34:58.5159493Z * [new tag] v2.2.0-rc5 -> v2.2.0-rc5 2025-09-07T07:34:58.5159622Z * [new tag] v2.2.0-rc6 -> v2.2.0-rc6 2025-09-07T07:34:58.5159804Z * [new tag] v2.2.0-rc7 -> v2.2.0-rc7 2025-09-07T07:34:58.5159921Z * [new tag] v2.2.0-rc8 -> v2.2.0-rc8 2025-09-07T07:34:58.5161726Z * [new tag] v2.2.1 -> v2.2.1 2025-09-07T07:34:58.5161897Z * [new tag] v2.2.1-rc1 -> v2.2.1-rc1 2025-09-07T07:34:58.5162009Z * [new tag] v2.2.1-rc2 -> v2.2.1-rc2 2025-09-07T07:34:58.5162114Z * [new tag] v2.2.1-rc3 -> v2.2.1-rc3 2025-09-07T07:34:58.5162232Z * [new tag] v2.2.2 -> v2.2.2 2025-09-07T07:34:58.5162337Z * [new tag] v2.2.2-rc1 -> v2.2.2-rc1 2025-09-07T07:34:58.5162679Z * [new tag] v2.2.2-rc2 -> v2.2.2-rc2 2025-09-07T07:34:58.5162781Z * [new tag] v2.2.2-rc3 -> v2.2.2-rc3 2025-09-07T07:34:58.5162891Z * [new tag] v2.3.0 -> v2.3.0 2025-09-07T07:34:58.5162996Z * [new tag] v2.3.0-rc1 -> v2.3.0-rc1 2025-09-07T07:34:58.5163121Z * [new tag] v2.3.0-rc10 -> v2.3.0-rc10 2025-09-07T07:34:58.5163237Z * [new tag] v2.3.0-rc11 -> v2.3.0-rc11 2025-09-07T07:34:58.5163344Z * [new tag] v2.3.0-rc12 -> v2.3.0-rc12 2025-09-07T07:34:58.5163984Z * [new tag] v2.3.0-rc2 -> v2.3.0-rc2 2025-09-07T07:34:58.5164460Z * [new tag] v2.3.0-rc3 -> v2.3.0-rc3 2025-09-07T07:34:58.5165246Z * [new tag] v2.3.0-rc4 -> v2.3.0-rc4 2025-09-07T07:34:58.5165661Z * [new tag] v2.3.0-rc5 -> v2.3.0-rc5 2025-09-07T07:34:58.5166067Z * [new tag] v2.3.0-rc6 -> v2.3.0-rc6 2025-09-07T07:34:58.5167377Z * [new tag] v2.3.0-rc7 -> v2.3.0-rc7 2025-09-07T07:34:58.5167615Z * [new tag] v2.3.0-rc8 -> v2.3.0-rc8 2025-09-07T07:34:58.5167926Z * [new tag] v2.3.0-rc9 -> v2.3.0-rc9 2025-09-07T07:34:58.5168492Z * [new tag] v2.3.1 -> v2.3.1 2025-09-07T07:34:58.5169825Z * [new tag] v2.3.1-rc1 -> v2.3.1-rc1 2025-09-07T07:34:58.5169932Z * [new tag] v2.3.1-rc2 -> v2.3.1-rc2 2025-09-07T07:34:58.5170074Z * [new tag] v2.3.1-rc3 -> v2.3.1-rc3 2025-09-07T07:34:58.5171340Z * [new tag] v2.4.0 -> v2.4.0 2025-09-07T07:34:58.5171456Z * [new tag] v2.4.0-rc1 -> v2.4.0-rc1 2025-09-07T07:34:58.5172020Z * [new tag] v2.4.0-rc2 -> v2.4.0-rc2 2025-09-07T07:34:58.5172558Z * [new tag] v2.4.0-rc3 -> v2.4.0-rc3 2025-09-07T07:34:58.5173146Z * [new tag] v2.4.0-rc4 -> v2.4.0-rc4 2025-09-07T07:34:58.5176359Z * [new tag] v2.4.0-rc5 -> v2.4.0-rc5 2025-09-07T07:34:58.5176514Z * [new tag] v2.4.0-rc6 -> v2.4.0-rc6 2025-09-07T07:34:58.5176618Z * [new tag] v2.4.0-rc7 -> v2.4.0-rc7 2025-09-07T07:34:58.5176726Z * [new tag] v2.4.0-rc8 -> v2.4.0-rc8 2025-09-07T07:34:58.5176825Z * [new tag] v2.4.0-rc9 -> v2.4.0-rc9 2025-09-07T07:34:58.5176937Z * [new tag] v2.4.1 -> v2.4.1 2025-09-07T07:34:58.5177357Z * [new tag] v2.4.1-rc1 -> v2.4.1-rc1 2025-09-07T07:34:58.5177913Z * [new tag] v2.4.1-rc2 -> v2.4.1-rc2 2025-09-07T07:34:58.5178687Z * [new tag] v2.4.1-rc3 -> v2.4.1-rc3 2025-09-07T07:34:58.5178926Z * [new tag] v2.5.0 -> v2.5.0 2025-09-07T07:34:58.5180141Z * [new tag] v2.5.0-rc1 -> v2.5.0-rc1 2025-09-07T07:34:58.5180274Z * [new tag] v2.5.0-rc10 -> v2.5.0-rc10 2025-09-07T07:34:58.5180606Z * [new tag] v2.5.0-rc2 -> v2.5.0-rc2 2025-09-07T07:34:58.5183097Z * [new tag] v2.5.0-rc3 -> v2.5.0-rc3 2025-09-07T07:34:58.5183241Z * [new tag] v2.5.0-rc4 -> v2.5.0-rc4 2025-09-07T07:34:58.5183346Z * [new tag] v2.5.0-rc5 -> v2.5.0-rc5 2025-09-07T07:34:58.5183641Z * [new tag] v2.5.0-rc6 -> v2.5.0-rc6 2025-09-07T07:34:58.5184283Z * [new tag] v2.5.0-rc7 -> v2.5.0-rc7 2025-09-07T07:34:58.5185102Z * [new 
tag] v2.5.0-rc8 -> v2.5.0-rc8 2025-09-07T07:34:58.5185336Z * [new tag] v2.5.0-rc9 -> v2.5.0-rc9 2025-09-07T07:34:58.5185784Z * [new tag] v2.5.1 -> v2.5.1 2025-09-07T07:34:58.5186198Z * [new tag] v2.5.1-rc1 -> v2.5.1-rc1 2025-09-07T07:34:58.5186555Z * [new tag] v2.6.0 -> v2.6.0 2025-09-07T07:34:58.5189254Z * [new tag] v2.6.0-rc1 -> v2.6.0-rc1 2025-09-07T07:34:58.5189396Z * [new tag] v2.6.0-rc2 -> v2.6.0-rc2 2025-09-07T07:34:58.5189498Z * [new tag] v2.6.0-rc3 -> v2.6.0-rc3 2025-09-07T07:34:58.5189593Z * [new tag] v2.6.0-rc4 -> v2.6.0-rc4 2025-09-07T07:34:58.5189745Z * [new tag] v2.6.0-rc5 -> v2.6.0-rc5 2025-09-07T07:34:58.5190800Z * [new tag] v2.6.0-rc6 -> v2.6.0-rc6 2025-09-07T07:34:58.5191027Z * [new tag] v2.6.0-rc7 -> v2.6.0-rc7 2025-09-07T07:34:58.5192052Z * [new tag] v2.6.0-rc8 -> v2.6.0-rc8 2025-09-07T07:34:58.5192235Z * [new tag] v2.6.0-rc9 -> v2.6.0-rc9 2025-09-07T07:34:58.5194844Z * [new tag] v2.7.0 -> v2.7.0 2025-09-07T07:34:58.5194998Z * [new tag] v2.7.0-rc1 -> v2.7.0-rc1 2025-09-07T07:34:58.5195121Z * [new tag] v2.7.0-rc10 -> v2.7.0-rc10 2025-09-07T07:34:58.5195223Z * [new tag] v2.7.0-rc2 -> v2.7.0-rc2 2025-09-07T07:34:58.5195699Z * [new tag] v2.7.0-rc3 -> v2.7.0-rc3 2025-09-07T07:34:58.5196260Z * [new tag] v2.7.0-rc4 -> v2.7.0-rc4 2025-09-07T07:34:58.5196719Z * [new tag] v2.7.0-rc5 -> v2.7.0-rc5 2025-09-07T07:34:58.5202413Z * [new tag] v2.7.0-rc6 -> v2.7.0-rc6 2025-09-07T07:34:58.5202543Z * [new tag] v2.7.0-rc7 -> v2.7.0-rc7 2025-09-07T07:34:58.5202658Z * [new tag] v2.7.0-rc8 -> v2.7.0-rc8 2025-09-07T07:34:58.5202770Z * [new tag] v2.7.0-rc9 -> v2.7.0-rc9 2025-09-07T07:34:58.5202884Z * [new tag] v2.7.1 -> v2.7.1 2025-09-07T07:34:58.5202981Z * [new tag] v2.7.1-rc1 -> v2.7.1-rc1 2025-09-07T07:34:58.5203086Z * [new tag] v2.7.1-rc2 -> v2.7.1-rc2 2025-09-07T07:34:58.5203187Z * [new tag] v2.7.1-rc3 -> v2.7.1-rc3 2025-09-07T07:34:58.5203280Z * [new tag] v2.7.1-rc4 -> v2.7.1-rc4 2025-09-07T07:34:58.5203387Z * [new tag] v2.7.1-rc5 -> v2.7.1-rc5 2025-09-07T07:34:58.5203483Z * [new tag] v2.8.0 -> v2.8.0 2025-09-07T07:34:58.5203617Z * [new tag] v2.8.0-rc1 -> v2.8.0-rc1 2025-09-07T07:34:58.5204173Z * [new tag] v2.8.0-rc2 -> v2.8.0-rc2 2025-09-07T07:34:58.5204938Z * [new tag] v2.8.0-rc3 -> v2.8.0-rc3 2025-09-07T07:34:58.5205392Z * [new tag] v2.8.0-rc4 -> v2.8.0-rc4 2025-09-07T07:34:58.5206303Z * [new tag] v2.8.0-rc5 -> v2.8.0-rc5 2025-09-07T07:34:58.5206656Z * [new tag] v2.8.0-rc6 -> v2.8.0-rc6 2025-09-07T07:34:58.5207395Z * [new tag] v2.8.0-rc7 -> v2.8.0-rc7 2025-09-07T07:34:58.5211665Z * [new tag] v2.8.0-rc8 -> v2.8.0-rc8 2025-09-07T07:34:58.5211968Z * [new tag] whc_flight_1 -> whc_flight_1 2025-09-07T07:34:58.5212086Z * [new tag] whc_flight_2 -> whc_flight_2 2025-09-07T07:34:58.5212188Z * [new tag] whc_flight_4 -> whc_flight_4 2025-09-07T07:34:58.5671576Z [command]/usr/bin/git rev-parse --verify --quiet 93fb23d6fae7c4e82c4239a1033e522088742634^{object} 2025-09-07T07:34:58.5700230Z 93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:34:58.5703386Z ##[endgroup] 2025-09-07T07:34:58.5703760Z ##[group]Determining the checkout info 2025-09-07T07:34:58.5704154Z ##[endgroup] 2025-09-07T07:34:58.5716967Z [command]/usr/bin/git sparse-checkout disable 2025-09-07T07:34:58.5766666Z [command]/usr/bin/git config --local --unset-all extensions.worktreeConfig 2025-09-07T07:34:58.5800519Z ##[group]Checking out the ref 2025-09-07T07:34:58.5800913Z [command]/usr/bin/git checkout --progress --force 93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:34:59.6085442Z Updating files: 98% (19066/19405) 
2025-09-07T07:34:59.6201361Z Updating files: 99% (19211/19405) 2025-09-07T07:34:59.6201841Z Updating files: 100% (19405/19405) 2025-09-07T07:34:59.6202164Z Updating files: 100% (19405/19405), done. 2025-09-07T07:34:59.6420320Z Note: switching to '93fb23d6fae7c4e82c4239a1033e522088742634'. 2025-09-07T07:34:59.6420614Z 2025-09-07T07:34:59.6420782Z You are in 'detached HEAD' state. You can look around, make experimental 2025-09-07T07:34:59.6421541Z changes and commit them, and you can discard any commits you make in this 2025-09-07T07:34:59.6421923Z state without impacting any branches by switching back to a branch. 2025-09-07T07:34:59.6422182Z 2025-09-07T07:34:59.6422336Z If you want to create a new branch to retain commits you create, you may 2025-09-07T07:34:59.6422688Z do so (now or later) by using -c with the switch command. Example: 2025-09-07T07:34:59.6422897Z 2025-09-07T07:34:59.6422991Z git switch -c <new-branch-name> 2025-09-07T07:34:59.6423133Z 2025-09-07T07:34:59.6423238Z Or undo this operation with: 2025-09-07T07:34:59.6423366Z 2025-09-07T07:34:59.6423445Z git switch - 2025-09-07T07:34:59.6423550Z 2025-09-07T07:34:59.6423720Z Turn off this advice by setting config variable advice.detachedHead to false 2025-09-07T07:34:59.6424006Z 2025-09-07T07:34:59.6424173Z HEAD is now at 93fb23d6fae Build vLLM nightly wheels (#162000) 2025-09-07T07:34:59.6471580Z ##[endgroup] 2025-09-07T07:34:59.6471965Z ##[group]Setting up auth for fetching submodules 2025-09-07T07:34:59.6479094Z [command]/usr/bin/git config --global http.https://github.com/.extraheader AUTHORIZATION: basic *** 2025-09-07T07:34:59.6536085Z [command]/usr/bin/git config --global --unset-all url.https://github.com/.insteadOf 2025-09-07T07:34:59.6562759Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf git@github.com: 2025-09-07T07:34:59.6596595Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf org-21003710@github.com: 2025-09-07T07:34:59.6619367Z ##[endgroup] 2025-09-07T07:34:59.6619894Z ##[group]Fetching submodules 2025-09-07T07:34:59.6626481Z [command]/usr/bin/git submodule sync --recursive 2025-09-07T07:34:59.6949567Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --recursive 2025-09-07T07:34:59.7532406Z Submodule 'android/libs/fbjni' (https://github.com/facebookincubator/fbjni.git) registered for path 'android/libs/fbjni' 2025-09-07T07:34:59.7533092Z Submodule 'third_party/NNPACK_deps/FP16' (https://github.com/Maratyszcza/FP16.git) registered for path 'third_party/FP16' 2025-09-07T07:34:59.7533675Z Submodule 'third_party/NNPACK_deps/FXdiv' (https://github.com/Maratyszcza/FXdiv.git) registered for path 'third_party/FXdiv' 2025-09-07T07:34:59.7538059Z Submodule 'third_party/NNPACK' (https://github.com/Maratyszcza/NNPACK.git) registered for path 'third_party/NNPACK' 2025-09-07T07:34:59.7539727Z Submodule 'third_party/NVTX' (https://github.com/NVIDIA/NVTX.git) registered for path 'third_party/NVTX' 2025-09-07T07:34:59.7543001Z Submodule 'third_party/VulkanMemoryAllocator' (https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator.git) registered for path 'third_party/VulkanMemoryAllocator' 2025-09-07T07:34:59.7554648Z Submodule 'third_party/XNNPACK' (https://github.com/google/XNNPACK.git) registered for path 'third_party/XNNPACK' 2025-09-07T07:34:59.7558327Z Submodule 'third_party/aiter' (https://github.com/ROCm/aiter.git) registered for path 'third_party/aiter' 2025-09-07T07:34:59.7558943Z Submodule 'third_party/benchmark'
(https://github.com/google/benchmark.git) registered for path 'third_party/benchmark' 2025-09-07T07:34:59.7564752Z Submodule 'third_party/composable_kernel' (https://github.com/ROCm/composable_kernel.git) registered for path 'third_party/composable_kernel' 2025-09-07T07:34:59.7567367Z Submodule 'third_party/cpp-httplib' (https://github.com/yhirose/cpp-httplib.git) registered for path 'third_party/cpp-httplib' 2025-09-07T07:34:59.7571855Z Submodule 'third_party/cpuinfo' (https://github.com/pytorch/cpuinfo.git) registered for path 'third_party/cpuinfo' 2025-09-07T07:34:59.7584939Z Submodule 'third_party/cudnn_frontend' (https://github.com/NVIDIA/cudnn-frontend.git) registered for path 'third_party/cudnn_frontend' 2025-09-07T07:34:59.7589980Z Submodule 'third_party/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/cutlass' 2025-09-07T07:34:59.7590784Z Submodule 'third_party/fbgemm' (https://github.com/pytorch/fbgemm) registered for path 'third_party/fbgemm' 2025-09-07T07:34:59.7600044Z Submodule 'third_party/flash-attention' (https://github.com/Dao-AILab/flash-attention.git) registered for path 'third_party/flash-attention' 2025-09-07T07:34:59.7600753Z Submodule 'third_party/flatbuffers' (https://github.com/google/flatbuffers.git) registered for path 'third_party/flatbuffers' 2025-09-07T07:34:59.7603291Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/fmt' 2025-09-07T07:34:59.7608159Z Submodule 'third_party/gemmlowp/gemmlowp' (https://github.com/google/gemmlowp.git) registered for path 'third_party/gemmlowp/gemmlowp' 2025-09-07T07:34:59.7610369Z Submodule 'third_party/gloo' (https://github.com/pytorch/gloo) registered for path 'third_party/gloo' 2025-09-07T07:34:59.7618383Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/googletest' 2025-09-07T07:34:59.7623859Z Submodule 'third_party/ideep' (https://github.com/intel/ideep) registered for path 'third_party/ideep' 2025-09-07T07:34:59.7624442Z Submodule 'third_party/ittapi' (https://github.com/intel/ittapi.git) registered for path 'third_party/ittapi' 2025-09-07T07:34:59.7630378Z Submodule 'third_party/kineto' (https://github.com/pytorch/kineto) registered for path 'third_party/kineto' 2025-09-07T07:34:59.7635233Z Submodule 'third_party/kleidiai' (https://github.com/ARM-software/kleidiai.git) registered for path 'third_party/kleidiai' 2025-09-07T07:34:59.7637811Z Submodule 'third_party/mimalloc' (https://github.com/microsoft/mimalloc.git) registered for path 'third_party/mimalloc' 2025-09-07T07:34:59.7640436Z Submodule 'third_party/nlohmann' (https://github.com/nlohmann/json.git) registered for path 'third_party/nlohmann' 2025-09-07T07:34:59.7652369Z Submodule 'third_party/onnx' (https://github.com/onnx/onnx.git) registered for path 'third_party/onnx' 2025-09-07T07:34:59.7653308Z Submodule 'third_party/opentelemetry-cpp' (https://github.com/open-telemetry/opentelemetry-cpp.git) registered for path 'third_party/opentelemetry-cpp' 2025-09-07T07:34:59.7654499Z Submodule 'third_party/pocketfft' (https://github.com/mreineck/pocketfft) registered for path 'third_party/pocketfft' 2025-09-07T07:34:59.7665547Z Submodule 'third_party/protobuf' (https://github.com/protocolbuffers/protobuf.git) registered for path 'third_party/protobuf' 2025-09-07T07:34:59.7666606Z Submodule 'third_party/NNPACK_deps/psimd' (https://github.com/Maratyszcza/psimd.git) registered for path 'third_party/psimd' 2025-09-07T07:34:59.7671606Z 
Submodule 'third_party/NNPACK_deps/pthreadpool' (https://github.com/Maratyszcza/pthreadpool.git) registered for path 'third_party/pthreadpool' 2025-09-07T07:34:59.7683151Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/pybind11' 2025-09-07T07:34:59.7684627Z Submodule 'third_party/python-peachpy' (https://github.com/malfet/PeachPy.git) registered for path 'third_party/python-peachpy' 2025-09-07T07:34:59.7691799Z Submodule 'third_party/sleef' (https://github.com/shibatch/sleef) registered for path 'third_party/sleef' 2025-09-07T07:34:59.7704784Z Submodule 'third_party/tensorpipe' (https://github.com/pytorch/tensorpipe.git) registered for path 'third_party/tensorpipe' 2025-09-07T07:34:59.7735830Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/android/libs/fbjni'... 2025-09-07T07:34:59.9955336Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FXdiv'... 2025-09-07T07:34:59.9955864Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FP16'... 2025-09-07T07:34:59.9956352Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/psimd'... 2025-09-07T07:34:59.9984011Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pybind11'... 2025-09-07T07:35:01.3727677Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pocketfft'... 2025-09-07T07:35:01.3728779Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/NNPACK'... 2025-09-07T07:35:01.3729551Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep'... 2025-09-07T07:35:01.3730330Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gloo'... 2025-09-07T07:35:01.3731132Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pthreadpool'... 2025-09-07T07:35:01.3731940Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/benchmark'... 2025-09-07T07:35:01.3732714Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/NVTX'... 2025-09-07T07:35:01.3733474Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ittapi'... 2025-09-07T07:35:01.3734281Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gemmlowp/gemmlowp'... 2025-09-07T07:35:01.3735119Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/python-peachpy'... 2025-09-07T07:35:01.3735914Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kleidiai'... 2025-09-07T07:35:01.3736677Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cpp-httplib'... 2025-09-07T07:35:01.3737459Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe'... 2025-09-07T07:35:01.3738260Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flash-attention'... 2025-09-07T07:35:01.3739031Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cpuinfo'... 2025-09-07T07:35:01.3739762Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/sleef'... 2025-09-07T07:35:01.3740493Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/mimalloc'... 2025-09-07T07:35:01.3741247Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/googletest'... 
2025-09-07T07:35:01.3742005Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto'... 2025-09-07T07:35:01.3742771Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cudnn_frontend'... 2025-09-07T07:35:01.3817534Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/VulkanMemoryAllocator'... 2025-09-07T07:35:01.5387968Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fmt'... 2025-09-07T07:35:01.6390087Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/XNNPACK'... 2025-09-07T07:35:13.1642025Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flatbuffers'... 2025-09-07T07:35:13.1642556Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm'... 2025-09-07T07:35:13.1642985Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cutlass'... 2025-09-07T07:35:13.1643401Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx'... 2025-09-07T07:35:13.1643962Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/composable_kernel'... 2025-09-07T07:35:13.1644433Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/aiter'... 2025-09-07T07:35:13.1644911Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp'... 2025-09-07T07:35:13.1645594Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/nlohmann'... 2025-09-07T07:35:13.1646042Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf'... 2025-09-07T07:35:13.1779678Z Submodule path 'android/libs/fbjni': checked out '7e1e1fe3858c63c251c637ae41a20de425dde96f' 2025-09-07T07:35:13.1887771Z Submodule path 'third_party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3' 2025-09-07T07:35:13.1979108Z Submodule path 'third_party/FXdiv': checked out 'b408327ac2a15ec3e43352421954f5b1967701d1' 2025-09-07T07:35:13.2192810Z Submodule path 'third_party/NNPACK': checked out 'c07e3a0400713d546e0dea2d5466dd22ea389c73' 2025-09-07T07:35:13.2855103Z Submodule path 'third_party/NVTX': checked out '2942f167cc30c5e3a44a2aecd5b0d9c07ff61a07' 2025-09-07T07:35:13.3322088Z Submodule path 'third_party/VulkanMemoryAllocator': checked out '1d8f600fd424278486eade7ed3e877c99f0846b1' 2025-09-07T07:35:13.8663913Z Submodule path 'third_party/XNNPACK': checked out '51a0103656eff6fc9bfd39a4597923c4b542c883' 2025-09-07T07:35:13.9926697Z Submodule path 'third_party/aiter': checked out '01aae101b9e5e94d6c16a9514c9fb8df99c93150' 2025-09-07T07:35:13.9941969Z Submodule '3rdparty/composable_kernel' (https://github.com/ROCm/composable_kernel.git) registered for path 'third_party/aiter/3rdparty/composable_kernel' 2025-09-07T07:35:13.9967464Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/aiter/3rdparty/composable_kernel'... 
2025-09-07T07:35:17.8507967Z Submodule path 'third_party/aiter/3rdparty/composable_kernel': checked out 'cffe8fa2a442ac8e80dd236a1a5d24fe3d7e0cbf' 2025-09-07T07:35:17.8707063Z Submodule path 'third_party/benchmark': checked out '299e5928955cc62af9968370293b916f5130916f' 2025-09-07T07:35:18.1164249Z Submodule path 'third_party/composable_kernel': checked out '7fe50dc3da2069d6645d9deb8c017a876472a977' 2025-09-07T07:35:18.1560006Z Submodule path 'third_party/cpp-httplib': checked out '89c932f313c6437c38f2982869beacc89c2f2246' 2025-09-07T07:35:18.2424857Z Submodule path 'third_party/cpuinfo': checked out '5e3d2445e6a84d9599bee2bf78edbb4d80865e1d' 2025-09-07T07:35:18.2815786Z Submodule path 'third_party/cudnn_frontend': checked out 'f937055efc6d414d11f4c6577e3977fe74f35fb6' 2025-09-07T07:35:18.7947821Z Submodule path 'third_party/cutlass': checked out 'e51efbfe18fe4f4cbb66ab814c55bf4aa0185491' 2025-09-07T07:35:18.9112865Z Submodule path 'third_party/fbgemm': checked out '4b39c551efe15e6bbade20565b0ceb2d8ce3352d' 2025-09-07T07:35:18.9136510Z Submodule 'external/asmjit' (https://github.com/asmjit/asmjit.git) registered for path 'third_party/fbgemm/external/asmjit' 2025-09-07T07:35:18.9141934Z Submodule 'external/composable_kernel' (https://github.com/jwfromm/composable_kernel.git) registered for path 'third_party/fbgemm/external/composable_kernel' 2025-09-07T07:35:18.9142875Z Submodule 'external/cpuinfo' (https://github.com/pytorch/cpuinfo) registered for path 'third_party/fbgemm/external/cpuinfo' 2025-09-07T07:35:18.9143622Z Submodule 'external/cutlass' (https://github.com/jwfromm/cutlass) registered for path 'third_party/fbgemm/external/cutlass' 2025-09-07T07:35:18.9146945Z Submodule 'external/googletest' (https://github.com/google/googletest) registered for path 'third_party/fbgemm/external/googletest' 2025-09-07T07:35:18.9148078Z Submodule 'external/hipify_torch' (https://github.com/ROCmSoftwarePlatform/hipify_torch.git) registered for path 'third_party/fbgemm/external/hipify_torch' 2025-09-07T07:35:18.9153499Z Submodule 'external/json' (https://github.com/nlohmann/json.git) registered for path 'third_party/fbgemm/external/json' 2025-09-07T07:35:18.9168953Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/asmjit'... 2025-09-07T07:35:20.1619631Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/hipify_torch'... 2025-09-07T07:35:20.1620285Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/cpuinfo'... 2025-09-07T07:35:20.1620832Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/googletest'... 2025-09-07T07:35:20.1621439Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/composable_kernel'... 2025-09-07T07:35:20.2622688Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/cutlass'... 2025-09-07T07:35:21.1693549Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/external/json'... 
2025-09-07T07:35:25.3840930Z Submodule path 'third_party/fbgemm/external/asmjit': checked out 'a3199e8857792cd10b7589ff5d58343d2c9008ea' 2025-09-07T07:35:25.5763800Z Submodule path 'third_party/fbgemm/external/composable_kernel': checked out 'b1281b8b08d973a7064f864f47eeb30f3e2596e9' 2025-09-07T07:35:25.6650234Z Submodule path 'third_party/fbgemm/external/cpuinfo': checked out '6543fec09b2f04ac4a666882998b534afc9c1349' 2025-09-07T07:35:26.1775664Z Submodule path 'third_party/fbgemm/external/cutlass': checked out '311f3c8e51dc0eb56310cfc6980bf63d0fbd7917' 2025-09-07T07:35:26.2166717Z Submodule path 'third_party/fbgemm/external/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723' 2025-09-07T07:35:26.2284036Z Submodule path 'third_party/fbgemm/external/hipify_torch': checked out '63b6a7b541fa7f08f8475ca7d74054db36ff2691' 2025-09-07T07:35:26.3153431Z Submodule path 'third_party/fbgemm/external/json': checked out '9cca280a4d0ccf0c08f47a99aa71d1b0e52f8d03' 2025-09-07T07:35:26.3729191Z Submodule path 'third_party/flash-attention': checked out '979702c87a8713a8e0a5e9fee122b90d2ef13be5' 2025-09-07T07:35:26.3757648Z Submodule 'csrc/composable_kernel' (https://github.com/ROCm/composable_kernel.git) registered for path 'third_party/flash-attention/csrc/composable_kernel' 2025-09-07T07:35:26.3763341Z Submodule 'csrc/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/flash-attention/csrc/cutlass' 2025-09-07T07:35:26.3784166Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flash-attention/csrc/composable_kernel'... 2025-09-07T07:35:30.0330707Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flash-attention/csrc/cutlass'... 2025-09-07T07:35:30.2165544Z Submodule path 'third_party/flash-attention/csrc/composable_kernel': checked out '888317e698e9803c62bd38568abc9e05d7709f33' 2025-09-07T07:35:30.6944971Z Submodule path 'third_party/flash-attention/csrc/cutlass': checked out 'c506e16788cb08416a4a57e11a9067beeee29420' 2025-09-07T07:35:30.8027164Z Submodule path 'third_party/flatbuffers': checked out 'a2cd1ea3b6d3fee220106b5fed3f7ce8da9eb757' 2025-09-07T07:35:30.8325414Z Submodule path 'third_party/fmt': checked out '40626af88bd7df9a5fb80be7b25ac85b122d6c21' 2025-09-07T07:35:30.8656037Z Submodule path 'third_party/gemmlowp/gemmlowp': checked out '3fb5c176c17c765a3492cd2f0321b0dab712f350' 2025-09-07T07:35:30.8873561Z Submodule path 'third_party/gloo': checked out 'c7b7b022c124d9643957d9bd55f57ac59fce8fa2' 2025-09-07T07:35:30.9252108Z Submodule path 'third_party/googletest': checked out '52eb8108c5bdec04579160ae17225d66034bd723' 2025-09-07T07:35:30.9376634Z Submodule path 'third_party/ideep': checked out '719d8e6cd7f7a0e01b155657526d693acf97c2b3' 2025-09-07T07:35:30.9389335Z Submodule 'mkl-dnn' (https://github.com/intel/mkl-dnn.git) registered for path 'third_party/ideep/mkl-dnn' 2025-09-07T07:35:30.9416042Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep/mkl-dnn'... 
2025-09-07T07:35:44.3987646Z Submodule path 'third_party/ideep/mkl-dnn': checked out '8d263e693366ef8db40acc569cc7d8edf644556d' 2025-09-07T07:35:44.4159212Z Submodule path 'third_party/ittapi': checked out 'dec1d23ca65ab069d225dfe40dea14f455170959' 2025-09-07T07:35:44.4995945Z Submodule path 'third_party/kineto': checked out '5e7501833f1021ce6f618572d3baf657b6319658' 2025-09-07T07:35:44.5012058Z Submodule 'libkineto/third_party/dynolog' (https://github.com/facebookincubator/dynolog.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog' 2025-09-07T07:35:44.5013641Z Submodule 'libkineto/third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/fmt' 2025-09-07T07:35:44.5016167Z Submodule 'libkineto/third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/googletest' 2025-09-07T07:35:44.5043185Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog'... 2025-09-07T07:35:45.7922314Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/fmt'... 2025-09-07T07:35:46.0020965Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/googletest'... 2025-09-07T07:35:46.0729200Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog': checked out '7d04a0053a845370ae06ce317a22a48e9edcc74e' 2025-09-07T07:35:46.0746381Z Submodule 'third_party/DCGM' (https://github.com/NVIDIA/DCGM.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-09-07T07:35:46.0747299Z Submodule 'third_party/cpr' (https://github.com/libcpr/cpr.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-09-07T07:35:46.0748052Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-09-07T07:35:46.0749839Z Submodule 'third_party/gflags' (https://github.com/gflags/gflags.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-09-07T07:35:46.0750865Z Submodule 'third_party/glog' (https://github.com/google/glog.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-09-07T07:35:46.0756108Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-09-07T07:35:46.0757031Z Submodule 'third_party/json' (https://github.com/nlohmann/json.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-09-07T07:35:46.0762365Z Submodule 'third_party/pfs' (https://github.com/dtrugman/pfs.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-09-07T07:35:46.0787469Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'... 2025-09-07T07:35:47.4593475Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'... 2025-09-07T07:35:47.4594376Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'... 
2025-09-07T07:35:47.4595096Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'... 2025-09-07T07:35:47.4595820Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/glog'... 2025-09-07T07:35:47.4596723Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'... 2025-09-07T07:35:47.4597386Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'... 2025-09-07T07:35:47.5592951Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/json'... 2025-09-07T07:35:54.3668411Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM': checked out 'ffde4e54bc7249a6039a5e6b45b395141e1217f9' 2025-09-07T07:35:54.3818741Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr': checked out '871ed52d350214a034f6ef8a3b8f51c5ce1bd400' 2025-09-07T07:35:54.4135374Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt': checked out 'cd4af11efc9c622896a3e4cb599fa28668ca3d05' 2025-09-07T07:35:54.4260752Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags': checked out 'e171aa2d15ed9eb17054558e0b3a6a413bb01067' 2025-09-07T07:35:54.4276760Z Submodule 'doc' (https://github.com/gflags/gflags.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-09-07T07:35:54.4306136Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'... 
2025-09-07T07:35:54.7664721Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc': checked out '8411df715cf522606e3b1aca386ddfc0b63d34b4' 2025-09-07T07:35:54.7837979Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog': checked out 'b33e3bad4c46c8a6345525fd822af355e5ef9446' 2025-09-07T07:35:54.8203886Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest': checked out '58d77fa8070e8cec2dc1ed015d66b454c8d78850' 2025-09-07T07:35:54.9067038Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json': checked out '4f8fba14066156b73f1189a2b8bd568bde5284c5' 2025-09-07T07:35:54.9217608Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs': checked out 'f68a2fa8ea36c783bdd760371411fcb495aa3150' 2025-09-07T07:35:54.9563340Z Submodule path 'third_party/kineto/libkineto/third_party/fmt': checked out '0041a40c1350ba702d475b9c4ad62da77caea164' 2025-09-07T07:35:55.0062212Z Submodule path 'third_party/kineto/libkineto/third_party/googletest': checked out '7aca84427f224eeed3144123d5230d5871e93347' 2025-09-07T07:35:55.0423604Z Submodule path 'third_party/kleidiai': checked out 'cca02c2f69dd18e1f12647c1c0bdc8cf90e680c7' 2025-09-07T07:35:55.0756352Z Submodule path 'third_party/mimalloc': checked out 'fbd8b99c2b828428947d70fdc046bb55609be93e' 2025-09-07T07:35:55.1695615Z Submodule path 'third_party/nlohmann': checked out '55f93686c01528224f448c19128836e7df245f72' 2025-09-07T07:35:55.4475759Z Submodule path 'third_party/onnx': checked out 'e709452ef2bbc1d113faf678c24e6d3467696e83' 2025-09-07T07:35:55.4502215Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/onnx/third_party/pybind11' 2025-09-07T07:35:55.4535934Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx/third_party/pybind11'... 
2025-09-07T07:35:56.4739070Z Submodule path 'third_party/onnx/third_party/pybind11': checked out 'a2e59f0e7065404b44dfe92a28aca47ba1378dc4' 2025-09-07T07:35:56.5278424Z Submodule path 'third_party/opentelemetry-cpp': checked out 'a799f4aed9c94b765dcdaabaeab7d5e7e2310878' 2025-09-07T07:35:56.5295146Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark) registered for path 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-09-07T07:35:56.5296031Z Submodule 'third_party/googletest' (https://github.com/google/googletest) registered for path 'third_party/opentelemetry-cpp/third_party/googletest' 2025-09-07T07:35:56.5297193Z Submodule 'third_party/ms-gsl' (https://github.com/microsoft/GSL) registered for path 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-09-07T07:35:56.5297958Z Submodule 'third_party/nlohmann-json' (https://github.com/nlohmann/json) registered for path 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-09-07T07:35:56.5298947Z Submodule 'third_party/opentelemetry-proto' (https://github.com/open-telemetry/opentelemetry-proto) registered for path 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-09-07T07:35:56.5301716Z Submodule 'third_party/opentracing-cpp' (https://github.com/opentracing/opentracing-cpp.git) registered for path 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-09-07T07:35:56.5302684Z Submodule 'third_party/prometheus-cpp' (https://github.com/jupp0r/prometheus-cpp) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-09-07T07:35:56.5305270Z Submodule 'tools/vcpkg' (https://github.com/Microsoft/vcpkg) registered for path 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-09-07T07:35:56.5331328Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/benchmark'... 2025-09-07T07:35:56.9180312Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/opentracing-cpp'... 2025-09-07T07:35:56.9181558Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/opentelemetry-proto'... 2025-09-07T07:35:56.9182379Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp'... 2025-09-07T07:35:56.9183160Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/ms-gsl'... 2025-09-07T07:35:57.0176726Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/googletest'... 2025-09-07T07:35:57.5970413Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/nlohmann-json'... 2025-09-07T07:36:04.6840847Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/tools/vcpkg'... 
2025-09-07T07:36:05.1445405Z Submodule path 'third_party/opentelemetry-cpp/third_party/benchmark': checked out 'd572f4777349d43653b21d6c2fc63020ab326db2' 2025-09-07T07:36:05.1801556Z Submodule path 'third_party/opentelemetry-cpp/third_party/googletest': checked out 'b796f7d44681514f58a683a3a71ff17c94edb0c1' 2025-09-07T07:36:05.1954393Z Submodule path 'third_party/opentelemetry-cpp/third_party/ms-gsl': checked out '6f4529395c5b7c2d661812257cd6780c67e54afa' 2025-09-07T07:36:05.2837640Z Submodule path 'third_party/opentelemetry-cpp/third_party/nlohmann-json': checked out 'bc889afb4c5bf1c0d8ee29ef35eaaf4c8bef8a5d' 2025-09-07T07:36:05.2961229Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto': checked out '4ca4f0335c63cda7ab31ea7ed70d6553aee14dce' 2025-09-07T07:36:05.3085088Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentracing-cpp': checked out '06b57f48ded1fa3bdd3d4346f6ef29e40e08eaf5' 2025-09-07T07:36:05.3218940Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp': checked out 'c9ffcdda9086ffd9e1283ea7a0276d831f3c8a8d' 2025-09-07T07:36:05.3235600Z Submodule 'civetweb' (https://github.com/civetweb/civetweb.git) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-09-07T07:36:05.3236546Z Submodule 'googletest' (https://github.com/google/googletest.git) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-09-07T07:36:05.3262498Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'... 2025-09-07T07:36:07.0356992Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'... 2025-09-07T07:36:07.2480616Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb': checked out 'eefb26f82b233268fc98577d265352720d477ba4' 2025-09-07T07:36:07.2872703Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929' 2025-09-07T07:36:07.6151733Z Submodule path 'third_party/opentelemetry-cpp/tools/vcpkg': checked out '8eb57355a4ffb410a2e94c07b4dca2dffbee8e50' 2025-09-07T07:36:07.6269136Z Submodule path 'third_party/pocketfft': checked out '0fa0ef591e38c2758e3184c6c23e497b9f732ffa' 2025-09-07T07:36:07.8486237Z Submodule path 'third_party/protobuf': checked out 'd1eca4e4b421cd2997495c4b4e65cea6be4e9b8a' 2025-09-07T07:36:07.8509701Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/protobuf/third_party/benchmark' 2025-09-07T07:36:07.8510746Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/protobuf/third_party/googletest' 2025-09-07T07:36:07.8532620Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/benchmark'... 2025-09-07T07:36:08.3746239Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/googletest'... 
2025-09-07T07:36:08.8047088Z Submodule path 'third_party/protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8' 2025-09-07T07:36:08.8662469Z Submodule path 'third_party/protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081' 2025-09-07T07:36:08.8757200Z Submodule path 'third_party/psimd': checked out '072586a71b55b7f8c584153d223e95687148a900' 2025-09-07T07:36:08.8878927Z Submodule path 'third_party/pthreadpool': checked out '4fe0e1e183925bf8cfa6aae24237e724a96479b8' 2025-09-07T07:36:08.9207921Z Submodule path 'third_party/pybind11': checked out 'f5fbe867d2d26e4a0a9177a51f6e568868ad3dc8' 2025-09-07T07:36:08.9458290Z Submodule path 'third_party/python-peachpy': checked out 'f45429b087dd7d5bc78bb40dc7cf06425c252d67' 2025-09-07T07:36:08.9836053Z Submodule path 'third_party/sleef': checked out '5a1d179df9cf652951b59010a2d2075372d67f68' 2025-09-07T07:36:09.0067579Z Submodule path 'third_party/tensorpipe': checked out 'af0118d13e52f5a08841464a768e01a0bf3e3075' 2025-09-07T07:36:09.0081189Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/tensorpipe/third_party/googletest' 2025-09-07T07:36:09.0082674Z Submodule 'third_party/libnop' (https://github.com/google/libnop.git) registered for path 'third_party/tensorpipe/third_party/libnop' 2025-09-07T07:36:09.0083464Z Submodule 'third_party/libuv' (https://github.com/libuv/libuv.git) registered for path 'third_party/tensorpipe/third_party/libuv' 2025-09-07T07:36:09.0085786Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/tensorpipe/third_party/pybind11' 2025-09-07T07:36:09.0115654Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/googletest'... 2025-09-07T07:36:09.9375279Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libnop'... 2025-09-07T07:36:09.9702298Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libuv'... 2025-09-07T07:36:10.2690363Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11'... 2025-09-07T07:36:10.3194700Z Submodule path 'third_party/tensorpipe/third_party/googletest': checked out 'aee0f9d9b5b87796ee8a0ab26b7587ec30e8858e' 2025-09-07T07:36:10.3325891Z Submodule path 'third_party/tensorpipe/third_party/libnop': checked out '910b55815be16109f04f4180e9adee14fb4ce281' 2025-09-07T07:36:10.3960109Z Submodule path 'third_party/tensorpipe/third_party/libuv': checked out '5152db2cbfeb5582e9c27c5ea1dba2cd9e10759b' 2025-09-07T07:36:10.4219648Z Submodule path 'third_party/tensorpipe/third_party/pybind11': checked out 'a23996fce38ff6ccfbcdc09f1e63f2c4be5ea2ef' 2025-09-07T07:36:10.4235546Z Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-09-07T07:36:10.4262132Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11/tools/clang'... 
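
The checkout above pins every submodule, including nested ones (fbgemm, flash-attention, kineto/dynolog, opentelemetry-cpp, protobuf, tensorpipe), to an exact commit. A minimal sketch of the equivalent manual steps is below; the clone URL and target directory are illustrative assumptions, while the commit SHA is the one this job later reports via `git log -1 --format=%H`:

    # Sketch only: reproduce the pinned, recursive submodule state locally.
    # The clone URL and directory name are assumptions for illustration.
    git clone https://github.com/pytorch/pytorch.git pytorch && cd pytorch
    git checkout 93fb23d6fae7c4e82c4239a1033e522088742634
    # Initialize and check out every submodule at the SHA recorded by the superproject,
    # recursing into nested submodules such as
    # third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc.
    git submodule update --init --recursive
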
2025-09-07T07:36:10.6250996Z Submodule path 'third_party/tensorpipe/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' 2025-09-07T07:36:10.6291790Z [command]/usr/bin/git submodule foreach --recursive git config --local gc.auto 0 2025-09-07T07:36:10.6614742Z Entering 'android/libs/fbjni' 2025-09-07T07:36:10.6656211Z Entering 'third_party/FP16' 2025-09-07T07:36:10.6699189Z Entering 'third_party/FXdiv' 2025-09-07T07:36:10.6737902Z Entering 'third_party/NNPACK' 2025-09-07T07:36:10.6781465Z Entering 'third_party/NVTX' 2025-09-07T07:36:10.6823633Z Entering 'third_party/VulkanMemoryAllocator' 2025-09-07T07:36:10.6863212Z Entering 'third_party/XNNPACK' 2025-09-07T07:36:10.6913835Z Entering 'third_party/aiter' 2025-09-07T07:36:10.6959888Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-09-07T07:36:10.7004209Z Entering 'third_party/benchmark' 2025-09-07T07:36:10.7042059Z Entering 'third_party/composable_kernel' 2025-09-07T07:36:10.7091667Z Entering 'third_party/cpp-httplib' 2025-09-07T07:36:10.7129110Z Entering 'third_party/cpuinfo' 2025-09-07T07:36:10.7167298Z Entering 'third_party/cudnn_frontend' 2025-09-07T07:36:10.7207256Z Entering 'third_party/cutlass' 2025-09-07T07:36:10.7255330Z Entering 'third_party/fbgemm' 2025-09-07T07:36:10.7295644Z Entering 'third_party/fbgemm/external/asmjit' 2025-09-07T07:36:10.7330691Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-09-07T07:36:10.7377538Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-09-07T07:36:10.7415865Z Entering 'third_party/fbgemm/external/cutlass' 2025-09-07T07:36:10.7465029Z Entering 'third_party/fbgemm/external/googletest' 2025-09-07T07:36:10.7504613Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-09-07T07:36:10.7540978Z Entering 'third_party/fbgemm/external/json' 2025-09-07T07:36:10.7584413Z Entering 'third_party/flash-attention' 2025-09-07T07:36:10.7624843Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-09-07T07:36:10.7668464Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-09-07T07:36:10.7710442Z Entering 'third_party/flatbuffers' 2025-09-07T07:36:10.7750335Z Entering 'third_party/fmt' 2025-09-07T07:36:10.7792236Z Entering 'third_party/gemmlowp/gemmlowp' 2025-09-07T07:36:10.7834607Z Entering 'third_party/gloo' 2025-09-07T07:36:10.7866034Z Entering 'third_party/googletest' 2025-09-07T07:36:10.7904856Z Entering 'third_party/ideep' 2025-09-07T07:36:10.7941420Z Entering 'third_party/ideep/mkl-dnn' 2025-09-07T07:36:10.7989942Z Entering 'third_party/ittapi' 2025-09-07T07:36:10.8026165Z Entering 'third_party/kineto' 2025-09-07T07:36:10.8068137Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-09-07T07:36:10.8102260Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-09-07T07:36:10.8142295Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-09-07T07:36:10.8183701Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-09-07T07:36:10.8220096Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-09-07T07:36:10.8258321Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-09-07T07:36:10.8305903Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-09-07T07:36:10.8339695Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-09-07T07:36:10.8392617Z Entering 
'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-09-07T07:36:10.8424999Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-09-07T07:36:10.8466132Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-09-07T07:36:10.8511046Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-09-07T07:36:10.8543760Z Entering 'third_party/kleidiai' 2025-09-07T07:36:10.8589887Z Entering 'third_party/mimalloc' 2025-09-07T07:36:10.8627942Z Entering 'third_party/nlohmann' 2025-09-07T07:36:10.8668814Z Entering 'third_party/onnx' 2025-09-07T07:36:10.8718659Z Entering 'third_party/onnx/third_party/pybind11' 2025-09-07T07:36:10.8764906Z Entering 'third_party/opentelemetry-cpp' 2025-09-07T07:36:10.8801162Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-09-07T07:36:10.8839284Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-09-07T07:36:10.8880302Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-09-07T07:36:10.8920556Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-09-07T07:36:10.8964718Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-09-07T07:36:10.9000861Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-09-07T07:36:10.9040036Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-09-07T07:36:10.9084502Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-09-07T07:36:10.9124571Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-09-07T07:36:10.9173290Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-09-07T07:36:10.9228663Z Entering 'third_party/pocketfft' 2025-09-07T07:36:10.9270769Z Entering 'third_party/protobuf' 2025-09-07T07:36:10.9315372Z Entering 'third_party/protobuf/third_party/benchmark' 2025-09-07T07:36:10.9359122Z Entering 'third_party/protobuf/third_party/googletest' 2025-09-07T07:36:10.9411323Z Entering 'third_party/psimd' 2025-09-07T07:36:10.9451262Z Entering 'third_party/pthreadpool' 2025-09-07T07:36:10.9489213Z Entering 'third_party/pybind11' 2025-09-07T07:36:10.9526081Z Entering 'third_party/python-peachpy' 2025-09-07T07:36:10.9564398Z Entering 'third_party/sleef' 2025-09-07T07:36:10.9601850Z Entering 'third_party/tensorpipe' 2025-09-07T07:36:10.9640357Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-09-07T07:36:10.9676952Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-09-07T07:36:10.9717403Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-09-07T07:36:10.9762415Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-09-07T07:36:10.9796093Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-09-07T07:36:10.9851627Z ##[endgroup] 2025-09-07T07:36:10.9852192Z ##[group]Persisting credentials for submodules 2025-09-07T07:36:10.9862087Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' && git config --local --unset-all 'url.https://github.com/.insteadOf' || :" 2025-09-07T07:36:11.0162761Z Entering 'android/libs/fbjni' 2025-09-07T07:36:11.0220488Z Entering 'third_party/FP16' 2025-09-07T07:36:11.0278933Z Entering 'third_party/FXdiv' 2025-09-07T07:36:11.0330689Z Entering 'third_party/NNPACK' 2025-09-07T07:36:11.0386739Z Entering 'third_party/NVTX' 2025-09-07T07:36:11.0437190Z Entering 'third_party/VulkanMemoryAllocator' 
2025-09-07T07:36:11.0487800Z Entering 'third_party/XNNPACK' 2025-09-07T07:36:11.0557986Z Entering 'third_party/aiter' 2025-09-07T07:36:11.0623383Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-09-07T07:36:11.0682418Z Entering 'third_party/benchmark' 2025-09-07T07:36:11.0736008Z Entering 'third_party/composable_kernel' 2025-09-07T07:36:11.0800245Z Entering 'third_party/cpp-httplib' 2025-09-07T07:36:11.0852035Z Entering 'third_party/cpuinfo' 2025-09-07T07:36:11.0910343Z Entering 'third_party/cudnn_frontend' 2025-09-07T07:36:11.0960490Z Entering 'third_party/cutlass' 2025-09-07T07:36:11.1014233Z Entering 'third_party/fbgemm' 2025-09-07T07:36:11.1069485Z Entering 'third_party/fbgemm/external/asmjit' 2025-09-07T07:36:11.1117426Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-09-07T07:36:11.1180831Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-09-07T07:36:11.1235252Z Entering 'third_party/fbgemm/external/cutlass' 2025-09-07T07:36:11.1301372Z Entering 'third_party/fbgemm/external/googletest' 2025-09-07T07:36:11.1349113Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-09-07T07:36:11.1399120Z Entering 'third_party/fbgemm/external/json' 2025-09-07T07:36:11.1463881Z Entering 'third_party/flash-attention' 2025-09-07T07:36:11.1519793Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-09-07T07:36:11.1575649Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-09-07T07:36:11.1642878Z Entering 'third_party/flatbuffers' 2025-09-07T07:36:11.1700530Z Entering 'third_party/fmt' 2025-09-07T07:36:11.1755542Z Entering 'third_party/gemmlowp/gemmlowp' 2025-09-07T07:36:11.1807485Z Entering 'third_party/gloo' 2025-09-07T07:36:11.1865901Z Entering 'third_party/googletest' 2025-09-07T07:36:11.1920616Z Entering 'third_party/ideep' 2025-09-07T07:36:11.1972949Z Entering 'third_party/ideep/mkl-dnn' 2025-09-07T07:36:11.2041410Z Entering 'third_party/ittapi' 2025-09-07T07:36:11.2099980Z Entering 'third_party/kineto' 2025-09-07T07:36:11.2152380Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-09-07T07:36:11.2212607Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-09-07T07:36:11.2267942Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-09-07T07:36:11.2324791Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-09-07T07:36:11.2382811Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-09-07T07:36:11.2432290Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-09-07T07:36:11.2487304Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-09-07T07:36:11.2540633Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-09-07T07:36:11.2601896Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-09-07T07:36:11.2654801Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-09-07T07:36:11.2708084Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-09-07T07:36:11.2761238Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-09-07T07:36:11.2818084Z Entering 'third_party/kleidiai' 2025-09-07T07:36:11.2878375Z Entering 'third_party/mimalloc' 2025-09-07T07:36:11.2927632Z Entering 'third_party/nlohmann' 2025-09-07T07:36:11.2990981Z Entering 'third_party/onnx' 2025-09-07T07:36:11.3060464Z Entering 'third_party/onnx/third_party/pybind11' 
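
Each of these passes applies one local Git setting to the whole submodule tree through `git submodule foreach --recursive`, which is why the same "Entering ..." listing repeats for every command. A minimal sketch of that pattern, reusing the gc.auto setting from the pass above (running it from the repository root, and the verification step, are assumptions):

    # Sketch: apply a local config change in every (recursively nested) submodule.
    # gc.auto=0 turns off automatic garbage collection for the duration of the job.
    git submodule foreach --recursive git config --local gc.auto 0
    # Spot-check the setting in one nested submodule (path taken from the log above).
    git -C third_party/kineto/libkineto/third_party/dynolog config --local gc.auto
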
2025-09-07T07:36:11.3120074Z Entering 'third_party/opentelemetry-cpp' 2025-09-07T07:36:11.3176631Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-09-07T07:36:11.3228819Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-09-07T07:36:11.3281818Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-09-07T07:36:11.3338277Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-09-07T07:36:11.3399423Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-09-07T07:36:11.3447886Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-09-07T07:36:11.3505638Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-09-07T07:36:11.3555450Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-09-07T07:36:11.3605785Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-09-07T07:36:11.3662818Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-09-07T07:36:11.3731033Z Entering 'third_party/pocketfft' 2025-09-07T07:36:11.3788620Z Entering 'third_party/protobuf' 2025-09-07T07:36:11.3843958Z Entering 'third_party/protobuf/third_party/benchmark' 2025-09-07T07:36:11.3892832Z Entering 'third_party/protobuf/third_party/googletest' 2025-09-07T07:36:11.3952757Z Entering 'third_party/psimd' 2025-09-07T07:36:11.4008512Z Entering 'third_party/pthreadpool' 2025-09-07T07:36:11.4061374Z Entering 'third_party/pybind11' 2025-09-07T07:36:11.4115773Z Entering 'third_party/python-peachpy' 2025-09-07T07:36:11.4171936Z Entering 'third_party/sleef' 2025-09-07T07:36:11.4225901Z Entering 'third_party/tensorpipe' 2025-09-07T07:36:11.4278219Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-09-07T07:36:11.4327832Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-09-07T07:36:11.4383244Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-09-07T07:36:11.4433605Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-09-07T07:36:11.4484006Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-09-07T07:36:11.4558613Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local 'http.https://github.com/.extraheader' 'AUTHORIZATION: basic ***' && git config --local --show-origin --name-only --get-regexp remote.origin.url" 2025-09-07T07:36:11.4859665Z Entering 'android/libs/fbjni' 2025-09-07T07:36:11.4901927Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config remote.origin.url 2025-09-07T07:36:11.4922653Z Entering 'third_party/FP16' 2025-09-07T07:36:11.4972548Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config remote.origin.url 2025-09-07T07:36:11.4995088Z Entering 'third_party/FXdiv' 2025-09-07T07:36:11.5040677Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config remote.origin.url 2025-09-07T07:36:11.5061512Z Entering 'third_party/NNPACK' 2025-09-07T07:36:11.5105386Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config remote.origin.url 2025-09-07T07:36:11.5121580Z Entering 'third_party/NVTX' 2025-09-07T07:36:11.5173503Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NVTX/config remote.origin.url 2025-09-07T07:36:11.5190002Z Entering 'third_party/VulkanMemoryAllocator' 2025-09-07T07:36:11.5238034Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config remote.origin.url 2025-09-07T07:36:11.5259142Z Entering 'third_party/XNNPACK' 2025-09-07T07:36:11.5313890Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config remote.origin.url 2025-09-07T07:36:11.5337237Z Entering 'third_party/aiter' 2025-09-07T07:36:11.5389655Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/config remote.origin.url 2025-09-07T07:36:11.5403935Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-09-07T07:36:11.5457844Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/aiter/modules/3rdparty/composable_kernel/config remote.origin.url 2025-09-07T07:36:11.5483802Z Entering 'third_party/benchmark' 2025-09-07T07:36:11.5532152Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config remote.origin.url 2025-09-07T07:36:11.5550123Z Entering 'third_party/composable_kernel' 2025-09-07T07:36:11.5599063Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/composable_kernel/config remote.origin.url 2025-09-07T07:36:11.5629684Z Entering 'third_party/cpp-httplib' 2025-09-07T07:36:11.5682006Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config remote.origin.url 2025-09-07T07:36:11.5694652Z Entering 'third_party/cpuinfo' 2025-09-07T07:36:11.5741177Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config remote.origin.url 2025-09-07T07:36:11.5759850Z Entering 'third_party/cudnn_frontend' 2025-09-07T07:36:11.5802697Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config remote.origin.url 2025-09-07T07:36:11.5819017Z Entering 'third_party/cutlass' 2025-09-07T07:36:11.5869470Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config remote.origin.url 2025-09-07T07:36:11.5892108Z Entering 'third_party/fbgemm' 2025-09-07T07:36:11.5939869Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config remote.origin.url 2025-09-07T07:36:11.5954883Z Entering 'third_party/fbgemm/external/asmjit' 2025-09-07T07:36:11.6001938Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/asmjit/config remote.origin.url 2025-09-07T07:36:11.6023794Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-09-07T07:36:11.6075512Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/composable_kernel/config remote.origin.url 2025-09-07T07:36:11.6101334Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-09-07T07:36:11.6150871Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cpuinfo/config remote.origin.url 2025-09-07T07:36:11.6168056Z Entering 'third_party/fbgemm/external/cutlass' 2025-09-07T07:36:11.6217824Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/cutlass/config remote.origin.url 2025-09-07T07:36:11.6240883Z Entering 'third_party/fbgemm/external/googletest' 2025-09-07T07:36:11.6287791Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/googletest/config remote.origin.url 2025-09-07T07:36:11.6305284Z Entering 
'third_party/fbgemm/external/hipify_torch' 2025-09-07T07:36:11.6355220Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/hipify_torch/config remote.origin.url 2025-09-07T07:36:11.6368257Z Entering 'third_party/fbgemm/external/json' 2025-09-07T07:36:11.6417343Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/external/json/config remote.origin.url 2025-09-07T07:36:11.6433767Z Entering 'third_party/flash-attention' 2025-09-07T07:36:11.6486276Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/config remote.origin.url 2025-09-07T07:36:11.6506035Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-09-07T07:36:11.6549348Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/composable_kernel/config remote.origin.url 2025-09-07T07:36:11.6572136Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-09-07T07:36:11.6618850Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flash-attention/modules/csrc/cutlass/config remote.origin.url 2025-09-07T07:36:11.6643266Z Entering 'third_party/flatbuffers' 2025-09-07T07:36:11.6697861Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config remote.origin.url 2025-09-07T07:36:11.6718048Z Entering 'third_party/fmt' 2025-09-07T07:36:11.6766412Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config remote.origin.url 2025-09-07T07:36:11.6784710Z Entering 'third_party/gemmlowp/gemmlowp' 2025-09-07T07:36:11.6829385Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config remote.origin.url 2025-09-07T07:36:11.6846761Z Entering 'third_party/gloo' 2025-09-07T07:36:11.6894302Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config remote.origin.url 2025-09-07T07:36:11.6912444Z Entering 'third_party/googletest' 2025-09-07T07:36:11.6960684Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config remote.origin.url 2025-09-07T07:36:11.6979475Z Entering 'third_party/ideep' 2025-09-07T07:36:11.7026224Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config remote.origin.url 2025-09-07T07:36:11.7042679Z Entering 'third_party/ideep/mkl-dnn' 2025-09-07T07:36:11.7094549Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config remote.origin.url 2025-09-07T07:36:11.7118686Z Entering 'third_party/ittapi' 2025-09-07T07:36:11.7164970Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config remote.origin.url 2025-09-07T07:36:11.7187840Z Entering 'third_party/kineto' 2025-09-07T07:36:11.7229780Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config remote.origin.url 2025-09-07T07:36:11.7244090Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-09-07T07:36:11.7291076Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config remote.origin.url 2025-09-07T07:36:11.7304320Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-09-07T07:36:11.7350060Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config remote.origin.url 2025-09-07T07:36:11.7365561Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-09-07T07:36:11.7417818Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config remote.origin.url 2025-09-07T07:36:11.7432860Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-09-07T07:36:11.7485811Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config remote.origin.url 2025-09-07T07:36:11.7506401Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-09-07T07:36:11.7549112Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config remote.origin.url 2025-09-07T07:36:11.7565878Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-09-07T07:36:11.7613425Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config remote.origin.url 2025-09-07T07:36:11.7634742Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-09-07T07:36:11.7679365Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config remote.origin.url 2025-09-07T07:36:11.7694527Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-09-07T07:36:11.7744241Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config remote.origin.url 2025-09-07T07:36:11.7763415Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-09-07T07:36:11.7813806Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config remote.origin.url 2025-09-07T07:36:11.7829055Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-09-07T07:36:11.7880735Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config remote.origin.url 2025-09-07T07:36:11.7902171Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-09-07T07:36:11.7946352Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config remote.origin.url 2025-09-07T07:36:11.7970207Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-09-07T07:36:11.8016744Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config remote.origin.url 2025-09-07T07:36:11.8038981Z Entering 'third_party/kleidiai' 2025-09-07T07:36:11.8087214Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kleidiai/config remote.origin.url 2025-09-07T07:36:11.8106222Z Entering 'third_party/mimalloc' 2025-09-07T07:36:11.8158947Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/mimalloc/config remote.origin.url 2025-09-07T07:36:11.8173484Z Entering 'third_party/nlohmann' 2025-09-07T07:36:11.8220557Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config remote.origin.url 2025-09-07T07:36:11.8235266Z Entering 'third_party/onnx' 2025-09-07T07:36:11.8284514Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config remote.origin.url 2025-09-07T07:36:11.8310783Z Entering 'third_party/onnx/third_party/pybind11' 2025-09-07T07:36:11.8363106Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url 2025-09-07T07:36:11.8382729Z Entering 'third_party/opentelemetry-cpp' 2025-09-07T07:36:11.8427880Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/config remote.origin.url 2025-09-07T07:36:11.8443272Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-09-07T07:36:11.8494026Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/benchmark/config remote.origin.url 2025-09-07T07:36:11.8513411Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-09-07T07:36:11.8561755Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/googletest/config remote.origin.url 2025-09-07T07:36:11.8575187Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-09-07T07:36:11.8620882Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/ms-gsl/config remote.origin.url 2025-09-07T07:36:11.8639839Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-09-07T07:36:11.8687620Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/nlohmann-json/config remote.origin.url 2025-09-07T07:36:11.8707951Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-09-07T07:36:11.8754374Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentelemetry-proto/config remote.origin.url 2025-09-07T07:36:11.8775997Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-09-07T07:36:11.8826180Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentracing-cpp/config remote.origin.url 2025-09-07T07:36:11.8841204Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-09-07T07:36:11.8893499Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/config remote.origin.url 2025-09-07T07:36:11.8911679Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-09-07T07:36:11.8959350Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/civetweb/config remote.origin.url 2025-09-07T07:36:11.8976640Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-09-07T07:36:11.9023778Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/googletest/config remote.origin.url 2025-09-07T07:36:11.9043043Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-09-07T07:36:11.9103817Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/tools/vcpkg/config remote.origin.url 2025-09-07T07:36:11.9127497Z Entering 'third_party/pocketfft' 2025-09-07T07:36:11.9178002Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config remote.origin.url 2025-09-07T07:36:11.9197634Z Entering 'third_party/protobuf' 2025-09-07T07:36:11.9240348Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config remote.origin.url 2025-09-07T07:36:11.9264121Z Entering 'third_party/protobuf/third_party/benchmark' 2025-09-07T07:36:11.9307201Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config remote.origin.url 2025-09-07T07:36:11.9324603Z Entering 'third_party/protobuf/third_party/googletest' 2025-09-07T07:36:11.9378648Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config remote.origin.url 2025-09-07T07:36:11.9396188Z Entering 'third_party/psimd' 2025-09-07T07:36:11.9442965Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config remote.origin.url 2025-09-07T07:36:11.9465120Z Entering 'third_party/pthreadpool' 2025-09-07T07:36:11.9509715Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config remote.origin.url 2025-09-07T07:36:11.9520030Z Entering 'third_party/pybind11' 2025-09-07T07:36:11.9573514Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config remote.origin.url 2025-09-07T07:36:11.9590618Z Entering 'third_party/python-peachpy' 2025-09-07T07:36:11.9636281Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config remote.origin.url 2025-09-07T07:36:11.9648426Z Entering 'third_party/sleef' 2025-09-07T07:36:11.9698270Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config remote.origin.url 2025-09-07T07:36:11.9716727Z Entering 'third_party/tensorpipe' 2025-09-07T07:36:11.9761860Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config remote.origin.url 2025-09-07T07:36:11.9783450Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-09-07T07:36:11.9825684Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config remote.origin.url 2025-09-07T07:36:11.9838435Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-09-07T07:36:11.9888947Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config remote.origin.url 2025-09-07T07:36:11.9905102Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-09-07T07:36:11.9953562Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config remote.origin.url 2025-09-07T07:36:11.9967167Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-09-07T07:36:12.0018061Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config remote.origin.url 2025-09-07T07:36:12.0028972Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-09-07T07:36:12.0078817Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url 2025-09-07T07:36:12.1025847Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:' 2025-09-07T07:36:12.1353226Z Entering 'android/libs/fbjni' 2025-09-07T07:36:12.1390743Z Entering 'third_party/FP16' 2025-09-07T07:36:12.1431276Z Entering 'third_party/FXdiv' 2025-09-07T07:36:12.1476402Z Entering 'third_party/NNPACK' 2025-09-07T07:36:12.1517217Z Entering 'third_party/NVTX' 2025-09-07T07:36:12.1558898Z Entering 'third_party/VulkanMemoryAllocator' 2025-09-07T07:36:12.1602747Z Entering 'third_party/XNNPACK' 2025-09-07T07:36:12.1658403Z Entering 'third_party/aiter' 2025-09-07T07:36:12.1702135Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-09-07T07:36:12.1741107Z Entering 'third_party/benchmark' 2025-09-07T07:36:12.1781179Z Entering 'third_party/composable_kernel' 2025-09-07T07:36:12.1829067Z Entering 'third_party/cpp-httplib' 2025-09-07T07:36:12.1868076Z Entering 'third_party/cpuinfo' 2025-09-07T07:36:12.1906308Z Entering 'third_party/cudnn_frontend' 2025-09-07T07:36:12.1945553Z Entering 'third_party/cutlass' 2025-09-07T07:36:12.1993382Z Entering 'third_party/fbgemm' 2025-09-07T07:36:12.2029631Z Entering 'third_party/fbgemm/external/asmjit' 2025-09-07T07:36:12.2068023Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-09-07T07:36:12.2110446Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-09-07T07:36:12.2159555Z Entering 'third_party/fbgemm/external/cutlass' 2025-09-07T07:36:12.2200364Z Entering 'third_party/fbgemm/external/googletest' 2025-09-07T07:36:12.2237953Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-09-07T07:36:12.2278426Z Entering 'third_party/fbgemm/external/json' 2025-09-07T07:36:12.2322315Z Entering 'third_party/flash-attention' 2025-09-07T07:36:12.2365839Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-09-07T07:36:12.2407022Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-09-07T07:36:12.2459351Z Entering 'third_party/flatbuffers' 2025-09-07T07:36:12.2503979Z Entering 'third_party/fmt' 2025-09-07T07:36:12.2537236Z Entering 'third_party/gemmlowp/gemmlowp' 2025-09-07T07:36:12.2580494Z Entering 'third_party/gloo' 2025-09-07T07:36:12.2621772Z Entering 'third_party/googletest' 2025-09-07T07:36:12.2658978Z Entering 'third_party/ideep' 2025-09-07T07:36:12.2697366Z Entering 'third_party/ideep/mkl-dnn' 2025-09-07T07:36:12.2742019Z Entering 'third_party/ittapi' 2025-09-07T07:36:12.2784539Z Entering 'third_party/kineto' 2025-09-07T07:36:12.2827778Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-09-07T07:36:12.2866624Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-09-07T07:36:12.2911147Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-09-07T07:36:12.2954732Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-09-07T07:36:12.2998701Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-09-07T07:36:12.3037249Z Entering 
'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-09-07T07:36:12.3080931Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-09-07T07:36:12.3125423Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-09-07T07:36:12.3166262Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-09-07T07:36:12.3208754Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-09-07T07:36:12.3247403Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-09-07T07:36:12.3287568Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-09-07T07:36:12.3328762Z Entering 'third_party/kleidiai' 2025-09-07T07:36:12.3367743Z Entering 'third_party/mimalloc' 2025-09-07T07:36:12.3411847Z Entering 'third_party/nlohmann' 2025-09-07T07:36:12.3449780Z Entering 'third_party/onnx' 2025-09-07T07:36:12.3502537Z Entering 'third_party/onnx/third_party/pybind11' 2025-09-07T07:36:12.3543873Z Entering 'third_party/opentelemetry-cpp' 2025-09-07T07:36:12.3588011Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-09-07T07:36:12.3624945Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-09-07T07:36:12.3670984Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-09-07T07:36:12.3714092Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-09-07T07:36:12.3761749Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-09-07T07:36:12.3799362Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-09-07T07:36:12.3839370Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-09-07T07:36:12.3885759Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-09-07T07:36:12.3925429Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-09-07T07:36:12.3974316Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-09-07T07:36:12.4026527Z Entering 'third_party/pocketfft' 2025-09-07T07:36:12.4069805Z Entering 'third_party/protobuf' 2025-09-07T07:36:12.4115610Z Entering 'third_party/protobuf/third_party/benchmark' 2025-09-07T07:36:12.4154518Z Entering 'third_party/protobuf/third_party/googletest' 2025-09-07T07:36:12.4203188Z Entering 'third_party/psimd' 2025-09-07T07:36:12.4240941Z Entering 'third_party/pthreadpool' 2025-09-07T07:36:12.4283680Z Entering 'third_party/pybind11' 2025-09-07T07:36:12.4322692Z Entering 'third_party/python-peachpy' 2025-09-07T07:36:12.4369359Z Entering 'third_party/sleef' 2025-09-07T07:36:12.4409362Z Entering 'third_party/tensorpipe' 2025-09-07T07:36:12.4451143Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-09-07T07:36:12.4492162Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-09-07T07:36:12.4535300Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-09-07T07:36:12.4585040Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-09-07T07:36:12.4619714Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-09-07T07:36:12.4683490Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:' 2025-09-07T07:36:12.5001683Z Entering 'android/libs/fbjni' 2025-09-07T07:36:12.5039095Z Entering 'third_party/FP16' 2025-09-07T07:36:12.5085014Z Entering 'third_party/FXdiv' 2025-09-07T07:36:12.5121636Z Entering 'third_party/NNPACK' 
2025-09-07T07:36:12.5163697Z Entering 'third_party/NVTX' 2025-09-07T07:36:12.5205156Z Entering 'third_party/VulkanMemoryAllocator' 2025-09-07T07:36:12.5243366Z Entering 'third_party/XNNPACK' 2025-09-07T07:36:12.5301462Z Entering 'third_party/aiter' 2025-09-07T07:36:12.5357327Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-09-07T07:36:12.5401369Z Entering 'third_party/benchmark' 2025-09-07T07:36:12.5439616Z Entering 'third_party/composable_kernel' 2025-09-07T07:36:12.5483454Z Entering 'third_party/cpp-httplib' 2025-09-07T07:36:12.5524355Z Entering 'third_party/cpuinfo' 2025-09-07T07:36:12.5564656Z Entering 'third_party/cudnn_frontend' 2025-09-07T07:36:12.5603480Z Entering 'third_party/cutlass' 2025-09-07T07:36:12.5655931Z Entering 'third_party/fbgemm' 2025-09-07T07:36:12.5695014Z Entering 'third_party/fbgemm/external/asmjit' 2025-09-07T07:36:12.5731199Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-09-07T07:36:12.5785865Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-09-07T07:36:12.5820583Z Entering 'third_party/fbgemm/external/cutlass' 2025-09-07T07:36:12.5865972Z Entering 'third_party/fbgemm/external/googletest' 2025-09-07T07:36:12.5907198Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-09-07T07:36:12.5943788Z Entering 'third_party/fbgemm/external/json' 2025-09-07T07:36:12.5992741Z Entering 'third_party/flash-attention' 2025-09-07T07:36:12.6031224Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-09-07T07:36:12.6084372Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-09-07T07:36:12.6134816Z Entering 'third_party/flatbuffers' 2025-09-07T07:36:12.6177391Z Entering 'third_party/fmt' 2025-09-07T07:36:12.6217473Z Entering 'third_party/gemmlowp/gemmlowp' 2025-09-07T07:36:12.6259229Z Entering 'third_party/gloo' 2025-09-07T07:36:12.6294310Z Entering 'third_party/googletest' 2025-09-07T07:36:12.6328614Z Entering 'third_party/ideep' 2025-09-07T07:36:12.6376111Z Entering 'third_party/ideep/mkl-dnn' 2025-09-07T07:36:12.6423052Z Entering 'third_party/ittapi' 2025-09-07T07:36:12.6465570Z Entering 'third_party/kineto' 2025-09-07T07:36:12.6506829Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-09-07T07:36:12.6542440Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-09-07T07:36:12.6583432Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-09-07T07:36:12.6625511Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-09-07T07:36:12.6672365Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-09-07T07:36:12.6711602Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-09-07T07:36:12.6761904Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-09-07T07:36:12.6800683Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-09-07T07:36:12.6841018Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-09-07T07:36:12.6884592Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-09-07T07:36:12.6926501Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-09-07T07:36:12.6978981Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-09-07T07:36:12.7021009Z Entering 'third_party/kleidiai' 2025-09-07T07:36:12.7069502Z Entering 'third_party/mimalloc' 2025-09-07T07:36:12.7113015Z Entering 'third_party/nlohmann' 
2025-09-07T07:36:12.7146021Z Entering 'third_party/onnx' 2025-09-07T07:36:12.7201890Z Entering 'third_party/onnx/third_party/pybind11' 2025-09-07T07:36:12.7242386Z Entering 'third_party/opentelemetry-cpp' 2025-09-07T07:36:12.7285895Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-09-07T07:36:12.7323176Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-09-07T07:36:12.7367738Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-09-07T07:36:12.7411329Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-09-07T07:36:12.7456698Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-09-07T07:36:12.7493161Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-09-07T07:36:12.7527160Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-09-07T07:36:12.7567971Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-09-07T07:36:12.7608784Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-09-07T07:36:12.7660633Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-09-07T07:36:12.7717025Z Entering 'third_party/pocketfft' 2025-09-07T07:36:12.7758850Z Entering 'third_party/protobuf' 2025-09-07T07:36:12.7804489Z Entering 'third_party/protobuf/third_party/benchmark' 2025-09-07T07:36:12.7847315Z Entering 'third_party/protobuf/third_party/googletest' 2025-09-07T07:36:12.7894069Z Entering 'third_party/psimd' 2025-09-07T07:36:12.7933361Z Entering 'third_party/pthreadpool' 2025-09-07T07:36:12.7976847Z Entering 'third_party/pybind11' 2025-09-07T07:36:12.8018334Z Entering 'third_party/python-peachpy' 2025-09-07T07:36:12.8055848Z Entering 'third_party/sleef' 2025-09-07T07:36:12.8097623Z Entering 'third_party/tensorpipe' 2025-09-07T07:36:12.8136435Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-09-07T07:36:12.8179538Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-09-07T07:36:12.8219890Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-09-07T07:36:12.8261328Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-09-07T07:36:12.8301173Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-09-07T07:36:12.8357000Z ##[endgroup] 2025-09-07T07:36:12.8395228Z [command]/usr/bin/git log -1 --format=%H 2025-09-07T07:36:12.8424567Z 93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:36:12.8541889Z ##[group]Run cd "${GITHUB_WORKSPACE}" 2025-09-07T07:36:12.8542160Z cd "${GITHUB_WORKSPACE}" 2025-09-07T07:36:12.8542381Z # Clean stale submodule dirs 2025-09-07T07:36:12.8542600Z if [ -z "${NO_SUDO}" ]; then 2025-09-07T07:36:12.8542869Z  sudo git submodule foreach --recursive git clean -ffdx 2025-09-07T07:36:12.8543118Z else 2025-09-07T07:36:12.8543329Z  git submodule foreach --recursive git clean -ffdx 2025-09-07T07:36:12.8543565Z fi 2025-09-07T07:36:12.8551303Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:36:12.8551551Z env: 2025-09-07T07:36:12.8551724Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:12.8551922Z NO_SUDO: true 2025-09-07T07:36:12.8552081Z ##[endgroup] 2025-09-07T07:36:12.8876653Z Entering 'android/libs/fbjni' 2025-09-07T07:36:12.8910782Z Entering 'third_party/FP16' 2025-09-07T07:36:12.8938327Z Entering 'third_party/FXdiv' 2025-09-07T07:36:12.8975215Z Entering 'third_party/NNPACK' 2025-09-07T07:36:12.9011910Z Entering 'third_party/NVTX' 2025-09-07T07:36:12.9047303Z Entering 'third_party/VulkanMemoryAllocator' 
2025-09-07T07:36:12.9075785Z Entering 'third_party/XNNPACK' 2025-09-07T07:36:12.9178403Z Entering 'third_party/aiter' 2025-09-07T07:36:12.9218306Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-09-07T07:36:12.9303560Z Entering 'third_party/benchmark' 2025-09-07T07:36:12.9335288Z Entering 'third_party/composable_kernel' 2025-09-07T07:36:12.9429713Z Entering 'third_party/cpp-httplib' 2025-09-07T07:36:12.9462108Z Entering 'third_party/cpuinfo' 2025-09-07T07:36:12.9495327Z Entering 'third_party/cudnn_frontend' 2025-09-07T07:36:12.9528177Z Entering 'third_party/cutlass' 2025-09-07T07:36:12.9606624Z Entering 'third_party/fbgemm' 2025-09-07T07:36:12.9655395Z Entering 'third_party/fbgemm/external/asmjit' 2025-09-07T07:36:12.9685908Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-09-07T07:36:12.9768242Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-09-07T07:36:12.9802701Z Entering 'third_party/fbgemm/external/cutlass' 2025-09-07T07:36:12.9882209Z Entering 'third_party/fbgemm/external/googletest' 2025-09-07T07:36:12.9916133Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-09-07T07:36:12.9941535Z Entering 'third_party/fbgemm/external/json' 2025-09-07T07:36:12.9988468Z Entering 'third_party/flash-attention' 2025-09-07T07:36:13.0021622Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-09-07T07:36:13.0103299Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-09-07T07:36:13.0181554Z Entering 'third_party/flatbuffers' 2025-09-07T07:36:13.0245495Z Entering 'third_party/fmt' 2025-09-07T07:36:13.0279816Z Entering 'third_party/gemmlowp/gemmlowp' 2025-09-07T07:36:13.0318834Z Entering 'third_party/gloo' 2025-09-07T07:36:13.0353735Z Entering 'third_party/googletest' 2025-09-07T07:36:13.0388399Z Entering 'third_party/ideep' 2025-09-07T07:36:13.0416023Z Entering 'third_party/ideep/mkl-dnn' 2025-09-07T07:36:13.0490844Z Entering 'third_party/ittapi' 2025-09-07T07:36:13.0524754Z Entering 'third_party/kineto' 2025-09-07T07:36:13.0556629Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-09-07T07:36:13.0593086Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-09-07T07:36:13.0631131Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-09-07T07:36:13.0661105Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-09-07T07:36:13.0695428Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-09-07T07:36:13.0718076Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-09-07T07:36:13.0753549Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-09-07T07:36:13.0780138Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-09-07T07:36:13.0815930Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-09-07T07:36:13.0851964Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-09-07T07:36:13.0881407Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-09-07T07:36:13.0915552Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-09-07T07:36:13.0947113Z Entering 'third_party/kleidiai' 2025-09-07T07:36:13.0984588Z Entering 'third_party/mimalloc' 2025-09-07T07:36:13.1019383Z Entering 'third_party/nlohmann' 2025-09-07T07:36:13.1058377Z Entering 'third_party/onnx' 2025-09-07T07:36:13.1277762Z Entering 'third_party/onnx/third_party/pybind11' 
2025-09-07T07:36:13.1314048Z Entering 'third_party/opentelemetry-cpp' 2025-09-07T07:36:13.1362228Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-09-07T07:36:13.1393433Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-09-07T07:36:13.1426656Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-09-07T07:36:13.1459198Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-09-07T07:36:13.1502077Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-09-07T07:36:13.1529814Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-09-07T07:36:13.1560903Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-09-07T07:36:13.1587266Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-09-07T07:36:13.1629049Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-09-07T07:36:13.1660725Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-09-07T07:36:13.1839238Z Entering 'third_party/pocketfft' 2025-09-07T07:36:13.1873271Z Entering 'third_party/protobuf' 2025-09-07T07:36:13.1930592Z Entering 'third_party/protobuf/third_party/benchmark' 2025-09-07T07:36:13.1964738Z Entering 'third_party/protobuf/third_party/googletest' 2025-09-07T07:36:13.2005497Z Entering 'third_party/psimd' 2025-09-07T07:36:13.2036855Z Entering 'third_party/pthreadpool' 2025-09-07T07:36:13.2065743Z Entering 'third_party/pybind11' 2025-09-07T07:36:13.2101816Z Entering 'third_party/python-peachpy' 2025-09-07T07:36:13.2132766Z Entering 'third_party/sleef' 2025-09-07T07:36:13.2165569Z Entering 'third_party/tensorpipe' 2025-09-07T07:36:13.2194181Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-09-07T07:36:13.2228244Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-09-07T07:36:13.2260091Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-09-07T07:36:13.2295729Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-09-07T07:36:13.2322269Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-09-07T07:36:13.2450557Z Prepare all required actions 2025-09-07T07:36:13.2451028Z Getting action download info 2025-09-07T07:36:13.4028463Z ##[group]Run ./.github/actions/setup-linux 2025-09-07T07:36:13.4028699Z env: 2025-09-07T07:36:13.4028871Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:13.4029050Z ##[endgroup] 2025-09-07T07:36:13.4062150Z ##[group]Run set -euo pipefail 2025-09-07T07:36:13.4062417Z set -euo pipefail 2025-09-07T07:36:13.4062618Z function get_ec2_metadata() { 2025-09-07T07:36:13.4062863Z  # Pulled from instance metadata endpoint for EC2 2025-09-07T07:36:13.4063261Z  # see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html 2025-09-07T07:36:13.4063603Z  category=$1 2025-09-07T07:36:13.4063842Z  # If it is GCP runner (runner name contains gcp), do not run this 2025-09-07T07:36:13.4064106Z  runner_name_str=i-06b49f47ba3e131d7 2025-09-07T07:36:13.4064374Z  if [[ -f /.inarc ]]; then 2025-09-07T07:36:13.4064598Z  echo "ARC Runner, no info on ec2 metadata" 2025-09-07T07:36:13.4064979Z  elif [[ $runner_name_str == *"gcp"* ]]; then 2025-09-07T07:36:13.4065268Z  echo "Runner is from Google Cloud Platform, No info on ec2 metadata" 2025-09-07T07:36:13.4065529Z  else 2025-09-07T07:36:13.4066035Z  curl -H "X-aws-ec2-metadata-token: $(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 30")" -fsSL 
"http://169.254.169.254/latest/meta-data/${category}" 2025-09-07T07:36:13.4066540Z  fi 2025-09-07T07:36:13.4066689Z } 2025-09-07T07:36:13.4066872Z echo "ami-id: $(get_ec2_metadata ami-id)" 2025-09-07T07:36:13.4067143Z echo "instance-id: $(get_ec2_metadata instance-id)" 2025-09-07T07:36:13.4067438Z echo "instance-type: $(get_ec2_metadata instance-type)" 2025-09-07T07:36:13.4067694Z echo "system info $(uname -a)" 2025-09-07T07:36:13.4072540Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:36:13.4072786Z env: 2025-09-07T07:36:13.4072955Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:13.4073127Z ##[endgroup] 2025-09-07T07:36:13.4225091Z ami-id: ami-05ffe3c48a9991133 2025-09-07T07:36:13.4327171Z instance-id: i-06b49f47ba3e131d7 2025-09-07T07:36:13.4424961Z instance-type: m7i-flex.8xlarge 2025-09-07T07:36:13.4434170Z system info Linux ip-10-0-56-51.ec2.internal 6.1.141-155.222.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jun 17 10:29:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux 2025-09-07T07:36:13.4452535Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-09-07T07:36:13.4453124Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-09-07T07:36:13.4457638Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:36:13.4457892Z env: 2025-09-07T07:36:13.4458047Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:13.4458227Z ##[endgroup] 2025-09-07T07:36:13.4501214Z ##[group]Run if systemctl is-active --quiet docker; then 2025-09-07T07:36:13.4501507Z if systemctl is-active --quiet docker; then 2025-09-07T07:36:13.4501757Z  echo "Docker daemon is running..."; 2025-09-07T07:36:13.4501971Z else 2025-09-07T07:36:13.4502199Z  echo "Starting docker daemon..." && sudo systemctl start docker; 2025-09-07T07:36:13.4502461Z fi 2025-09-07T07:36:13.4506155Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:36:13.4506398Z env: 2025-09-07T07:36:13.4506551Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:13.4506730Z ##[endgroup] 2025-09-07T07:36:13.4631345Z Docker daemon is running... 2025-09-07T07:36:13.4660239Z ##[group]Run nick-fields/retry@v3.0.0 2025-09-07T07:36:13.4660474Z with: 2025-09-07T07:36:13.4660632Z shell: bash 2025-09-07T07:36:13.4660948Z timeout_minutes: 5 2025-09-07T07:36:13.4661135Z max_attempts: 3 2025-09-07T07:36:13.4661322Z retry_wait_seconds: 30 2025-09-07T07:36:13.4662734Z command: AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\") aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \ --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" # For LF Runners we need to make sure we also login to Meta's ECR docker registry too. 
META_AWS_ACCOUNT_ID=308535385114 if [ "$AWS_ACCOUNT_ID" != "$META_AWS_ACCOUNT_ID" ] ; then aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \ --password-stdin "$META_AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" fi 2025-09-07T07:36:13.4664087Z polling_interval_seconds: 1 2025-09-07T07:36:13.4664300Z warning_on_retry: true 2025-09-07T07:36:13.4664491Z continue_on_error: false 2025-09-07T07:36:13.4664680Z env: 2025-09-07T07:36:13.4664848Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:13.4665034Z AWS_RETRY_MODE: standard 2025-09-07T07:36:13.4665222Z AWS_MAX_ATTEMPTS: 5 2025-09-07T07:36:13.4665495Z AWS_DEFAULT_REGION: us-east-1 2025-09-07T07:36:13.4665698Z ##[endgroup] 2025-09-07T07:36:14.4081132Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-09-07T07:36:14.4081573Z Configure a credential helper to remove this warning. See 2025-09-07T07:36:14.4081981Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-09-07T07:36:14.4087312Z 2025-09-07T07:36:14.4093105Z Login Succeeded 2025-09-07T07:36:14.5367347Z Command completed after 1 attempt(s). 2025-09-07T07:36:14.5421889Z ##[group]Run env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-09-07T07:36:14.5422237Z env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-09-07T07:36:14.5422524Z env | grep '^CI' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-09-07T07:36:14.5428359Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:36:14.5428605Z env: 2025-09-07T07:36:14.5428769Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:14.5429008Z ##[endgroup] 2025-09-07T07:36:14.5501660Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2025-09-07T07:36:14.5502036Z # ignore expansion of "docker ps -q" since it could be empty 2025-09-07T07:36:14.5502307Z # shellcheck disable=SC2046 2025-09-07T07:36:14.5502544Z docker stop $(docker ps -q) || true 2025-09-07T07:36:14.5502789Z # Prune all of the docker images 2025-09-07T07:36:14.5503007Z docker system prune -af 2025-09-07T07:36:14.5507162Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:36:14.5507404Z env: 2025-09-07T07:36:14.5507565Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:14.5507748Z ##[endgroup] 2025-09-07T07:36:14.6008094Z "docker stop" requires at least 1 argument. 2025-09-07T07:36:14.6008429Z See 'docker stop --help'. 2025-09-07T07:36:14.6008611Z 2025-09-07T07:36:14.6008765Z Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...] 2025-09-07T07:36:14.6008948Z 2025-09-07T07:36:14.6009062Z Stop one or more running containers 2025-09-07T07:36:14.6218751Z Total reclaimed space: 0B 2025-09-07T07:36:14.6251880Z ##[group]Run set +e 2025-09-07T07:36:14.6252115Z set +e 2025-09-07T07:36:14.6252307Z set -x 2025-09-07T07:36:14.6252480Z  2025-09-07T07:36:14.6252676Z PT_DOMAIN=download.pytorch.org 2025-09-07T07:36:14.6253076Z # TODO: Flaky access to download.pytorch.org https://github.com/pytorch/pytorch/issues/100400, 2025-09-07T07:36:14.6253573Z # cleaning this up once the issue is fixed. There are more than one resolved IP here, the last 2025-09-07T07:36:14.6253937Z # one is returned at random 2025-09-07T07:36:14.6254226Z RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" | tail -n1) 2025-09-07T07:36:14.6254485Z  2025-09-07T07:36:14.6254787Z if [ -z "${RESOLVED_IP}" ]; then 2025-09-07T07:36:14.6255096Z  echo "Couldn't resolve ${PT_DOMAIN}, retrying with Google DNS..." 
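The registry login that just succeeded above is the standard ECR handshake, wrapped in nick-fields/retry so a transient token-fetch failure gets another attempt. A minimal sketch of the same pattern (account lookup via --query instead of grep/cut; variable names illustrative):

# Sketch only: log in to an ECR registry with a short-lived password, retrying a few times.
region="us-east-1"
account_id=$(aws sts get-caller-identity --query Account --output text)
registry="${account_id}.dkr.ecr.${region}.amazonaws.com"
for attempt in 1 2 3; do
  aws ecr get-login-password --region "${region}" \
    | docker login --username AWS --password-stdin "${registry}" && break
  sleep $((attempt * 30))
done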
2025-09-07T07:36:14.6255470Z  RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" @8.8.8.8 | tail -n1) 2025-09-07T07:36:14.6255744Z  2025-09-07T07:36:14.6255926Z  if [ -z "${RESOLVED_IP}" ]; then 2025-09-07T07:36:14.6256192Z  echo "Couldn't resolve ${PT_DOMAIN}, exiting..." 2025-09-07T07:36:14.6256427Z  exit 1 2025-09-07T07:36:14.6256592Z  fi 2025-09-07T07:36:14.6256740Z fi 2025-09-07T07:36:14.6256893Z  2025-09-07T07:36:14.6257079Z if grep -r "${PT_DOMAIN}" /etc/hosts; then 2025-09-07T07:36:14.6257330Z  # Clean up any old records first 2025-09-07T07:36:14.6257554Z  sudo sed -i "/${PT_DOMAIN}/d" /etc/hosts 2025-09-07T07:36:14.6257766Z fi 2025-09-07T07:36:14.6257911Z  2025-09-07T07:36:14.6258120Z echo "${RESOLVED_IP} ${PT_DOMAIN}" | sudo tee -a /etc/hosts 2025-09-07T07:36:14.6258358Z cat /etc/hosts 2025-09-07T07:36:14.6262944Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:36:14.6263290Z env: 2025-09-07T07:36:14.6263460Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:14.6263638Z ##[endgroup] 2025-09-07T07:36:14.6286402Z + PT_DOMAIN=download.pytorch.org 2025-09-07T07:36:14.6297482Z ++ dig -4 +short download.pytorch.org 2025-09-07T07:36:14.6297968Z ++ tail -n1 2025-09-07T07:36:14.7171474Z + RESOLVED_IP=18.160.10.76 2025-09-07T07:36:14.7171757Z + '[' -z 18.160.10.76 ']' 2025-09-07T07:36:14.7171983Z + grep -r download.pytorch.org /etc/hosts 2025-09-07T07:36:14.7191781Z + sudo tee -a /etc/hosts 2025-09-07T07:36:14.7192185Z + echo '18.160.10.76 download.pytorch.org' 2025-09-07T07:36:14.9868178Z 18.160.10.76 download.pytorch.org 2025-09-07T07:36:14.9899550Z + cat /etc/hosts 2025-09-07T07:36:14.9912605Z 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 2025-09-07T07:36:14.9924211Z ::1 localhost6 localhost6.localdomain6 2025-09-07T07:36:14.9924574Z 18.160.10.76 download.pytorch.org 2025-09-07T07:36:15.0026328Z ##[group]Run pytorch/test-infra/.github/actions/calculate-docker-image@main 2025-09-07T07:36:15.0026633Z with: 2025-09-07T07:36:15.0027182Z docker-image-name: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:15.0027773Z use-custom-docker-registry: true 2025-09-07T07:36:15.0028005Z docker-build-dir: .ci/docker 2025-09-07T07:36:15.0028226Z docker-build-script: ./build.sh 2025-09-07T07:36:15.0028433Z working-directory: . 2025-09-07T07:36:15.0028676Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-07T07:36:15.0028934Z force-push: false 2025-09-07T07:36:15.0029100Z env: 2025-09-07T07:36:15.0029252Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:15.0029435Z ##[endgroup] 2025-09-07T07:36:15.0042432Z ##[group]Run set -ex 2025-09-07T07:36:15.0042677Z set -ex 2025-09-07T07:36:15.0042855Z  2025-09-07T07:36:15.0043178Z # If the docker build directory or the build script doesn't exist, the action will 2025-09-07T07:36:15.0043645Z # gracefully return the docker image name as it is. 
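The /etc/hosts pin above works around flaky resolution of download.pytorch.org (pytorch/pytorch issue 100400) by freezing whichever address dig returns. A condensed sketch of the resolve-then-pin pattern (domain as an illustrative variable; falls back to Google DNS):

# Sketch only: resolve a host, fall back to 8.8.8.8, then pin the answer in /etc/hosts.
domain="download.pytorch.org"
ip=$(dig -4 +short "${domain}" | tail -n1)
[ -z "${ip}" ] && ip=$(dig -4 +short "${domain}" @8.8.8.8 | tail -n1)
[ -z "${ip}" ] && { echo "could not resolve ${domain}"; exit 1; }
sudo sed -i "/${domain}/d" /etc/hosts            # drop any stale pin first
echo "${ip} ${domain}" | sudo tee -a /etc/hosts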
Pulling docker image in Linux 2025-09-07T07:36:15.0044033Z # job could then download the pre-built image as usual 2025-09-07T07:36:15.0044502Z if [[ -d "${DOCKER_BUILD_DIR}" ]] && [[ -f "${DOCKER_BUILD_DIR}/${DOCKER_BUILD_SCRIPT}" ]] && [[ "${USE_CUSTOM_DOCKER_REGISTRY}" == "true" ]]; then 2025-09-07T07:36:15.0044942Z  echo "skip=false" >> "${GITHUB_OUTPUT}" 2025-09-07T07:36:15.0045441Z else 2025-09-07T07:36:15.0045658Z  echo "skip=true" >> "${GITHUB_OUTPUT}" 2025-09-07T07:36:15.0045977Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2025-09-07T07:36:15.0046268Z  2025-09-07T07:36:15.0046665Z  echo "Not using custom ECR registry. Either it was not requested or there is no Docker build script in the ${REPO_NAME} repo..." 2025-09-07T07:36:15.0047197Z  exit 0 2025-09-07T07:36:15.0047371Z fi 2025-09-07T07:36:15.0047545Z  2025-09-07T07:36:15.0047806Z if [[ "${DOCKER_IMAGE_NAME}" == *"${DOCKER_REGISTRY}/${REPO_NAME}"* ]]; then 2025-09-07T07:36:15.0048229Z  # The docker image name already includes the ECR prefix and tag, so we can just 2025-09-07T07:36:15.0048603Z  # use it as it is, but first let's extract the tag 2025-09-07T07:36:15.0048948Z  DOCKER_TAG=$(echo "${DOCKER_IMAGE_NAME}" | awk -F '[:,]' '{print $2}') 2025-09-07T07:36:15.0049276Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-09-07T07:36:15.0049589Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2025-09-07T07:36:15.0049852Z else 2025-09-07T07:36:15.0050041Z  if [[ "${DOCKER_IMAGE_NAME}" == *:* ]]; then 2025-09-07T07:36:15.0050288Z  CUSTOM_TAG_PREFIX=${DOCKER_IMAGE_NAME#*:} 2025-09-07T07:36:15.0050556Z  DOCKER_IMAGE_NAME=${DOCKER_IMAGE_NAME%%:*} 2025-09-07T07:36:15.0050912Z  fi 2025-09-07T07:36:15.0051226Z  DOCKER_TAG=${CUSTOM_TAG_PREFIX:+${CUSTOM_TAG_PREFIX}-}$(git rev-parse HEAD:"${DOCKER_BUILD_DIR}") 2025-09-07T07:36:15.0051617Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-09-07T07:36:15.0052032Z  echo "docker-image=${DOCKER_REGISTRY}/${REPO_NAME}/${DOCKER_IMAGE_NAME}:${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2025-09-07T07:36:15.0052492Z  echo "custom-tag-prefix=${CUSTOM_TAG_PREFIX}" >> "${GITHUB_OUTPUT}" 2025-09-07T07:36:15.0052765Z fi 2025-09-07T07:36:15.0059870Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:36:15.0060115Z env: 2025-09-07T07:36:15.0060282Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:15.0060469Z REPO_NAME: pytorch 2025-09-07T07:36:15.0061098Z DOCKER_IMAGE_NAME: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:15.0061643Z DOCKER_BUILD_DIR: .ci/docker 2025-09-07T07:36:15.0061846Z DOCKER_BUILD_SCRIPT: ./build.sh 2025-09-07T07:36:15.0062104Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-07T07:36:15.0062368Z USE_CUSTOM_DOCKER_REGISTRY: true 2025-09-07T07:36:15.0062564Z CUSTOM_TAG_PREFIX: 2025-09-07T07:36:15.0062740Z ##[endgroup] 2025-09-07T07:36:15.0084601Z + [[ -d .ci/docker ]] 2025-09-07T07:36:15.0084852Z + [[ -f .ci/docker/./build.sh ]] 2025-09-07T07:36:15.0085081Z + [[ true == \t\r\u\e ]] 2025-09-07T07:36:15.0085278Z + echo skip=false 2025-09-07T07:36:15.0086003Z + [[ 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 == *\3\0\8\5\3\5\3\8\5\1\1\4\.\d\k\r\.\e\c\r\.\u\s\-\e\a\s\t\-\1\.\a\m\a\z\o\n\a\w\s\.\c\o\m\/\p\y\t\o\r\c\h* ]] 2025-09-07T07:36:15.0093404Z ++ awk -F '[:,]' '{print $2}' 
2025-09-07T07:36:15.0094143Z ++ echo 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:15.0114664Z + DOCKER_TAG=pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:15.0115500Z + echo docker-tag=pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:15.0116385Z + echo docker-image=308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:15.0136377Z ##[group]Run set +e 2025-09-07T07:36:15.0136604Z set +e 2025-09-07T07:36:15.0136781Z set -x 2025-09-07T07:36:15.0136945Z  2025-09-07T07:36:15.0137103Z login() { 2025-09-07T07:36:15.0137430Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2025-09-07T07:36:15.0137782Z } 2025-09-07T07:36:15.0137949Z  2025-09-07T07:36:15.0138102Z retry () { 2025-09-07T07:36:15.0138306Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2025-09-07T07:36:15.0138529Z } 2025-09-07T07:36:15.0138684Z  2025-09-07T07:36:15.0138853Z retry login "${DOCKER_REGISTRY}" 2025-09-07T07:36:15.0139070Z  2025-09-07T07:36:15.0139237Z START_TIME=$(date +%s) 2025-09-07T07:36:15.0139456Z # Wait up to 120 minutes 2025-09-07T07:36:15.0139718Z while [[ $(( $(date +%s) - 7200 )) -lt $START_TIME ]]; do 2025-09-07T07:36:15.0140040Z  # Check if image already exists, if it does then skip building it 2025-09-07T07:36:15.0140367Z  if docker manifest inspect "${DOCKER_IMAGE}"; then 2025-09-07T07:36:15.0140616Z  exit 0 2025-09-07T07:36:15.0140789Z  fi 2025-09-07T07:36:15.0140944Z  2025-09-07T07:36:15.0141213Z  # NB: This flag is used by Docker build workflow to push the image to ECR, so we can 2025-09-07T07:36:15.0141714Z  # use this to differentiate between the Docker build and regular build jobs. For the 2025-09-07T07:36:15.0142134Z  # latter, it will wait for the Docker images to become available before continuing 2025-09-07T07:36:15.0142479Z  if [ "${DOCKER_PUSH:-false}" == "true" ]; then 2025-09-07T07:36:15.0142750Z  # It's a Docker build job, let's build the image 2025-09-07T07:36:15.0142991Z  break 2025-09-07T07:36:15.0143165Z  else 2025-09-07T07:36:15.0143404Z  # It's a regular build job, wait for the image to become available 2025-09-07T07:36:15.0143676Z  sleep 300 2025-09-07T07:36:15.0143862Z  fi 2025-09-07T07:36:15.0144026Z done 2025-09-07T07:36:15.0144185Z  2025-09-07T07:36:15.0144425Z # NB: This part requires a full checkout. Otherwise, the merge base will 2025-09-07T07:36:15.0144872Z # be empty. 
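The docker-tag emitted above is simply everything after the colon in the fully qualified ECR reference (that is all the awk -F '[:,]' call extracts). The same split can be done with parameter expansion, as a later step in this job does for the ghcr.io mirror; a sketch with an illustrative reference:

# Sketch only: split an image reference into repository and tag.
image="308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77"
tag="${image##*:}"      # text after the last ':'
repo="${image%:*}"      # everything before the tag
echo "repo=${repo} tag=${tag}"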
The default action would be to continue rebuild the image 2025-09-07T07:36:15.0145613Z if [[ "$BASE_REVISION" = "$(git rev-parse HEAD)" ]]; then 2025-09-07T07:36:15.0145931Z  # if we're on the base branch then use the parent commit 2025-09-07T07:36:15.0146222Z  MERGE_BASE=$(git rev-parse HEAD~) 2025-09-07T07:36:15.0146441Z else 2025-09-07T07:36:15.0146700Z  # otherwise we're on a PR, so use the most recent base commit 2025-09-07T07:36:15.0147028Z  MERGE_BASE=$(git merge-base HEAD "$BASE_REVISION") 2025-09-07T07:36:15.0147282Z fi 2025-09-07T07:36:15.0147475Z  2025-09-07T07:36:15.0147641Z if [[ -z "${MERGE_BASE}" ]]; then 2025-09-07T07:36:15.0147867Z  echo "rebuild=true" >> "${GITHUB_OUTPUT}" 2025-09-07T07:36:15.0148078Z  2025-09-07T07:36:15.0148370Z  echo "Finding merge base only works with full checkout, please set fetch-depth to 0, continuing ..." 2025-09-07T07:36:15.0148698Z  exit 0 2025-09-07T07:36:15.0148846Z fi 2025-09-07T07:36:15.0148991Z  2025-09-07T07:36:15.0149197Z if ! git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}"; then 2025-09-07T07:36:15.0149596Z  echo "Directory '${DOCKER_BUILD_DIR}' not found in commit $MERGE_BASE, you should rebase onto a more recent commit" 2025-09-07T07:36:15.0149935Z  exit 1 2025-09-07T07:36:15.0150083Z fi 2025-09-07T07:36:15.0150229Z  2025-09-07T07:36:15.0150461Z PREVIOUS_DOCKER_TAG=$(git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}") 2025-09-07T07:36:15.0150848Z # If no image exists but the hash is the same as the previous hash then we should error out here 2025-09-07T07:36:15.0151195Z if [[ "${PREVIOUS_DOCKER_TAG}" == "${DOCKER_TAG}" ]]; then 2025-09-07T07:36:15.0151596Z  echo "WARNING: Something has gone wrong and the previous image isn't available for the merge-base of your branch" 2025-09-07T07:36:15.0152036Z  echo " Will re-build docker image to store in local cache, TTS may be longer" 2025-09-07T07:36:15.0152303Z fi 2025-09-07T07:36:15.0152446Z  2025-09-07T07:36:15.0152616Z echo "rebuild=true" >> "${GITHUB_OUTPUT}" 2025-09-07T07:36:15.0156845Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:36:15.0157086Z env: 2025-09-07T07:36:15.0157252Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:15.0157435Z DOCKER_BUILD_DIR: .ci/docker 2025-09-07T07:36:15.0157668Z BASE_REVISION: 93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:36:15.0158250Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:15.0158967Z DOCKER_TAG: pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:15.0159419Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-07T07:36:15.0159748Z DOCKER_PUSH: 2025-09-07T07:36:15.0159908Z ##[endgroup] 2025-09-07T07:36:15.0183947Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-07T07:36:15.0184230Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-07T07:36:15.0190861Z + aws ecr get-login-password --region us-east-1 2025-09-07T07:36:15.0195804Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-07T07:36:15.4506601Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-09-07T07:36:15.4507001Z Login Succeeded 2025-09-07T07:36:15.4511575Z Configure a credential helper to remove this warning. 
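The rebuild logic listed above keys off whether the tree hash of the Docker build directory changed between HEAD and the merge base. A compressed sketch of that decision, not the full script (BASE_REVISION and the build dir taken from the step env; the final tree hash stands in for DOCKER_TAG, which is derived the same way):

# Sketch only: the gist of the rebuild decision (only reached when no pre-built image was found).
base_revision="93fb23d6fae7c4e82c4239a1033e522088742634"   # BASE_REVISION from the step env
build_dir=".ci/docker"
if [ "${base_revision}" = "$(git rev-parse HEAD)" ]; then
  merge_base=$(git rev-parse HEAD~)                    # on the base branch: use the parent commit
else
  merge_base=$(git merge-base HEAD "${base_revision}") # on a PR: use the merge base with the base branch
fi
previous_tag=$(git rev-parse "${merge_base}:${build_dir}")
current_tag=$(git rev-parse "HEAD:${build_dir}")       # tree hash of the Docker build inputs at HEAD
if [ "${previous_tag}" = "${current_tag}" ]; then
  echo "WARNING: build inputs unchanged but no image found; rebuild will be slower"
fi
echo "rebuild=true"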
See 2025-09-07T07:36:15.4512181Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-09-07T07:36:15.4512460Z 2025-09-07T07:36:15.4527215Z ++ date +%s 2025-09-07T07:36:15.4536778Z + START_TIME=1757230575 2025-09-07T07:36:15.4539636Z ++ date +%s 2025-09-07T07:36:15.4551152Z + [[ 1757223375 -lt 1757230575 ]] 2025-09-07T07:36:15.4551900Z + docker manifest inspect 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:15.7247076Z { 2025-09-07T07:36:15.7247433Z "schemaVersion": 2, 2025-09-07T07:36:15.7247802Z "mediaType": "application/vnd.docker.distribution.manifest.v2+json", 2025-09-07T07:36:15.7248149Z "config": { 2025-09-07T07:36:15.7248419Z "mediaType": "application/vnd.docker.container.image.v1+json", 2025-09-07T07:36:15.7248719Z "size": 30269, 2025-09-07T07:36:15.7249056Z "digest": "sha256:662d8c9dfc7db2f5d004293de4f2b7647941dee4c916479ef082d17fcdfd9c47" 2025-09-07T07:36:15.7249402Z }, 2025-09-07T07:36:15.7249572Z "layers": [ 2025-09-07T07:36:15.7249750Z { 2025-09-07T07:36:15.7250015Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7250332Z "size": 30448359, 2025-09-07T07:36:15.7250686Z "digest": "sha256:e6fdc8487bfe6d764301ef3634bc6c043841dc3ab05ca14f81e69c0f92562d46" 2025-09-07T07:36:15.7251020Z }, 2025-09-07T07:36:15.7251171Z { 2025-09-07T07:36:15.7251414Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7251704Z "size": 1554, 2025-09-07T07:36:15.7252000Z "digest": "sha256:18a5ee5b0e2e283bf6d7b9c4c312b0448c75eff1c43446c22c5139a3aeec97fe" 2025-09-07T07:36:15.7252322Z }, 2025-09-07T07:36:15.7252480Z { 2025-09-07T07:36:15.7252703Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7252965Z "size": 313297813, 2025-09-07T07:36:15.7253250Z "digest": "sha256:572424b92528ee46c84fdf3e9e1f5fd75e302621ad75dcf4257ad06778885094" 2025-09-07T07:36:15.7253547Z }, 2025-09-07T07:36:15.7253687Z { 2025-09-07T07:36:15.7253911Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7254200Z "size": 793, 2025-09-07T07:36:15.7254498Z "digest": "sha256:1c35b7d4b67c6769f59f96a643d69c214c5b00291a4968cdd395eedbce82b9c0" 2025-09-07T07:36:15.7254812Z }, 2025-09-07T07:36:15.7254949Z { 2025-09-07T07:36:15.7255183Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7255468Z "size": 106, 2025-09-07T07:36:15.7255771Z "digest": "sha256:68c20f3c23bb0bddb9b69e6ce2e45bcd5b1fcfd9b37dbe3de26b8a5f0e81ff13" 2025-09-07T07:36:15.7256087Z }, 2025-09-07T07:36:15.7256231Z { 2025-09-07T07:36:15.7256465Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7256751Z "size": 704, 2025-09-07T07:36:15.7257028Z "digest": "sha256:7efa39950d3273a15b20bc5f6659373b2b4eb62e36328d96b289834c48d2e408" 2025-09-07T07:36:15.7257343Z }, 2025-09-07T07:36:15.7257486Z { 2025-09-07T07:36:15.7257715Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7257995Z "size": 1214, 2025-09-07T07:36:15.7258292Z "digest": "sha256:a10eb16a7271e996ea9f1d769ba6bd2ec69358f2a79cf26649595a8cea38275f" 2025-09-07T07:36:15.7258609Z }, 2025-09-07T07:36:15.7259130Z { 2025-09-07T07:36:15.7259353Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7259636Z "size": 485, 2025-09-07T07:36:15.7259916Z "digest": 
"sha256:7d52cf57965449440c17f257fe4c522f9685019961eaa9853d7c820cfe39f5cc" 2025-09-07T07:36:15.7260301Z }, 2025-09-07T07:36:15.7260437Z { 2025-09-07T07:36:15.7260665Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7260948Z "size": 110343705, 2025-09-07T07:36:15.7261246Z "digest": "sha256:cb6a20fcf4e24ec2e1f72ecf361b26e058f3e6194947a9b3a25312223d43516e" 2025-09-07T07:36:15.7261566Z }, 2025-09-07T07:36:15.7261702Z { 2025-09-07T07:36:15.7261929Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7262214Z "size": 4787, 2025-09-07T07:36:15.7262504Z "digest": "sha256:46fb6a8b3e1d4eac9b3a21577824410003ed38f194b4b1486b747e324b32ef6a" 2025-09-07T07:36:15.7262831Z }, 2025-09-07T07:36:15.7263088Z { 2025-09-07T07:36:15.7263324Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7263740Z "size": 1709, 2025-09-07T07:36:15.7264033Z "digest": "sha256:5ad6977cc38e4ea8a6545d6a4fc0e2fdde705a7af96eb496cfe20f264fbc1e74" 2025-09-07T07:36:15.7264364Z }, 2025-09-07T07:36:15.7264501Z { 2025-09-07T07:36:15.7264716Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7264999Z "size": 724, 2025-09-07T07:36:15.7265282Z "digest": "sha256:da63046995a2e510b7146776371a14bff4b31002cc3ef0322e45a3932fba2031" 2025-09-07T07:36:15.7265592Z }, 2025-09-07T07:36:15.7265726Z { 2025-09-07T07:36:15.7265933Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7266198Z "size": 543, 2025-09-07T07:36:15.7266488Z "digest": "sha256:78243fdb9906cb588921ddaa67a3ca915aa9447ca675faac1a9ebc420a561d83" 2025-09-07T07:36:15.7266804Z }, 2025-09-07T07:36:15.7266938Z { 2025-09-07T07:36:15.7267177Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7267468Z "size": 3395447162, 2025-09-07T07:36:15.7267768Z "digest": "sha256:6f70d5d50abaab8988f460b5590d92b6d1d340575ddee981662c24034d7d20af" 2025-09-07T07:36:15.7268079Z }, 2025-09-07T07:36:15.7268223Z { 2025-09-07T07:36:15.7268448Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7268736Z "size": 32, 2025-09-07T07:36:15.7269018Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-09-07T07:36:15.7269344Z }, 2025-09-07T07:36:15.7269487Z { 2025-09-07T07:36:15.7269715Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7269987Z "size": 380, 2025-09-07T07:36:15.7270270Z "digest": "sha256:69715d3ad3c493436abde51f5a575e79f7d55b46c653f5607f3c7722ad9a05db" 2025-09-07T07:36:15.7270643Z }, 2025-09-07T07:36:15.7270796Z { 2025-09-07T07:36:15.7271021Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7271309Z "size": 235844, 2025-09-07T07:36:15.7271599Z "digest": "sha256:7ace90c063f3f3ce8f04b541afe935088868930e5c074824af2b2c327779a3b5" 2025-09-07T07:36:15.7271941Z }, 2025-09-07T07:36:15.7272081Z { 2025-09-07T07:36:15.7272319Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7272612Z "size": 230, 2025-09-07T07:36:15.7272905Z "digest": "sha256:acbd5447dd1406dab8e46234f6a034a75ad9794f76c24f817b0ecf28b6a69c78" 2025-09-07T07:36:15.7273233Z }, 2025-09-07T07:36:15.7273375Z { 2025-09-07T07:36:15.7273599Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7273887Z "size": 3396092, 2025-09-07T07:36:15.7274202Z "digest": "sha256:744523d9b7f5a3e7abfc646c2d5222e7379024242430b93cb4b8093574e69022" 
2025-09-07T07:36:15.7274511Z }, 2025-09-07T07:36:15.7274652Z { 2025-09-07T07:36:15.7274878Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7275203Z "size": 1477, 2025-09-07T07:36:15.7275483Z "digest": "sha256:5bd615a7b945084e11bcb40190f9d6e50367297237146df7b008fa8c668f29c8" 2025-09-07T07:36:15.7275798Z }, 2025-09-07T07:36:15.7275939Z { 2025-09-07T07:36:15.7276154Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7276431Z "size": 482, 2025-09-07T07:36:15.7276719Z "digest": "sha256:f4986a00e3aecf1d56beaada7aba8c49fbb3683db3c99790ab0aa4caaa34f76f" 2025-09-07T07:36:15.7277040Z }, 2025-09-07T07:36:15.7277175Z { 2025-09-07T07:36:15.7277400Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7277676Z "size": 196, 2025-09-07T07:36:15.7277956Z "digest": "sha256:21902f6e4f8cb76c82e755b8fc9f72e1912bf925ab345ab5b4cc2210f4887a64" 2025-09-07T07:36:15.7278260Z }, 2025-09-07T07:36:15.7278402Z { 2025-09-07T07:36:15.7278624Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7278892Z "size": 608, 2025-09-07T07:36:15.7279216Z "digest": "sha256:d80602abf3ccf0c0b527848a403dfde36e1cf1db1416852385feda5c44bf4363" 2025-09-07T07:36:15.7279528Z }, 2025-09-07T07:36:15.7279670Z { 2025-09-07T07:36:15.7279893Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7280155Z "size": 226, 2025-09-07T07:36:15.7280432Z "digest": "sha256:3c51bf0bc362d34a17911f73c5146cbd668c4d1cf1b944cbf40a604d71cd623a" 2025-09-07T07:36:15.7280733Z }, 2025-09-07T07:36:15.7280874Z { 2025-09-07T07:36:15.7281090Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7281362Z "size": 828, 2025-09-07T07:36:15.7281639Z "digest": "sha256:119ab3bceafa6f2cab4b1f71161195139792990263ee8de82230c6284f0ae20a" 2025-09-07T07:36:15.7281942Z }, 2025-09-07T07:36:15.7282077Z { 2025-09-07T07:36:15.7282300Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7282570Z "size": 32, 2025-09-07T07:36:15.7282853Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-09-07T07:36:15.7283154Z }, 2025-09-07T07:36:15.7283296Z { 2025-09-07T07:36:15.7283550Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7283842Z "size": 104, 2025-09-07T07:36:15.7284179Z "digest": "sha256:af8eadc9eaabdaf6c5e01031d63061605327153e07568ddd159966ecea75cd07" 2025-09-07T07:36:15.7284490Z }, 2025-09-07T07:36:15.7284631Z { 2025-09-07T07:36:15.7284860Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7285125Z "size": 1495, 2025-09-07T07:36:15.7285403Z "digest": "sha256:e7769b0d7a8262f3cc32a9d96080de5318dac3d2617e10508a167e689016e40c" 2025-09-07T07:36:15.7285705Z }, 2025-09-07T07:36:15.7285846Z { 2025-09-07T07:36:15.7286065Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7286340Z "size": 453908015, 2025-09-07T07:36:15.7286637Z "digest": "sha256:ba263639b0f4634277ef3b8903e3457ac27ce012f1bbeeeeb773191c2c3b222b" 2025-09-07T07:36:15.7287011Z }, 2025-09-07T07:36:15.7287149Z { 2025-09-07T07:36:15.7287381Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7287657Z "size": 164, 2025-09-07T07:36:15.7287941Z "digest": "sha256:a5ab7a280382a797dd5ba6a6716f667a231540ad1e0e7c8ba48bb24d5ab80ef0" 2025-09-07T07:36:15.7288244Z }, 2025-09-07T07:36:15.7288390Z { 2025-09-07T07:36:15.7288619Z "mediaType": 
"application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7288893Z "size": 346, 2025-09-07T07:36:15.7289166Z "digest": "sha256:80b2232d952f55c3662cffd657ba30fe825f08dfcc5bbea13e2bc6de4482b7e4" 2025-09-07T07:36:15.7289477Z }, 2025-09-07T07:36:15.7289620Z { 2025-09-07T07:36:15.7289846Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7290112Z "size": 32, 2025-09-07T07:36:15.7290391Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-09-07T07:36:15.7290703Z }, 2025-09-07T07:36:15.7290898Z { 2025-09-07T07:36:15.7291114Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7291385Z "size": 106, 2025-09-07T07:36:15.7291659Z "digest": "sha256:cc93cd65e90f0a9c50194579c93e96897f4e582b9777a1c4d7df7b913ddcdded" 2025-09-07T07:36:15.7291965Z }, 2025-09-07T07:36:15.7292103Z { 2025-09-07T07:36:15.7292326Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7292599Z "size": 425, 2025-09-07T07:36:15.7292878Z "digest": "sha256:0eed4c15712bc470dac7df87e33b3570a1510344019dd9cc0e95b8beb1f98372" 2025-09-07T07:36:15.7293184Z }, 2025-09-07T07:36:15.7293326Z { 2025-09-07T07:36:15.7293550Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7293822Z "size": 19309387, 2025-09-07T07:36:15.7294097Z "digest": "sha256:092516f71fe325518f9737f105bcd65c40cd35c3019098889757e2c84c03c8a8" 2025-09-07T07:36:15.7294379Z }, 2025-09-07T07:36:15.7294557Z { 2025-09-07T07:36:15.7294777Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7295025Z "size": 108, 2025-09-07T07:36:15.7295282Z "digest": "sha256:8c0825014a6270f765ff514da8583d55874f3278bef76e5617e29115f91ee654" 2025-09-07T07:36:15.7295566Z }, 2025-09-07T07:36:15.7295701Z { 2025-09-07T07:36:15.7295908Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7296166Z "size": 636, 2025-09-07T07:36:15.7296520Z "digest": "sha256:8e0d2f63da0a8ff07657d7e06cdbc1ad9d5db95614d640a9f7a9aa8c30c9986d" 2025-09-07T07:36:15.7296820Z }, 2025-09-07T07:36:15.7296949Z { 2025-09-07T07:36:15.7297164Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7297425Z "size": 724, 2025-09-07T07:36:15.7297688Z "digest": "sha256:da63046995a2e510b7146776371a14bff4b31002cc3ef0322e45a3932fba2031" 2025-09-07T07:36:15.7297966Z }, 2025-09-07T07:36:15.7298108Z { 2025-09-07T07:36:15.7298324Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7298588Z "size": 148, 2025-09-07T07:36:15.7298844Z "digest": "sha256:73aae7958ba1a16c5f5625d39b06208e1def8c7816bb75028bf0845f553a5068" 2025-09-07T07:36:15.7299137Z }, 2025-09-07T07:36:15.7299275Z { 2025-09-07T07:36:15.7299485Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7299737Z "size": 136, 2025-09-07T07:36:15.7300001Z "digest": "sha256:ac6077ec9fa50fc0822d387d2ee35e1b6f1f56612402fe7195378180b25087bc" 2025-09-07T07:36:15.7300288Z }, 2025-09-07T07:36:15.7300426Z { 2025-09-07T07:36:15.7300628Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7300883Z "size": 140, 2025-09-07T07:36:15.7301148Z "digest": "sha256:bf4ee4e45e92ef179f7fc64e2c7c6755905a969c37cf82c39aafbadd9290ff04" 2025-09-07T07:36:15.7301437Z }, 2025-09-07T07:36:15.7301566Z { 2025-09-07T07:36:15.7301778Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7302039Z "size": 18617175577, 
2025-09-07T07:36:15.7302320Z "digest": "sha256:c1b766f9b961bcc863d6f89d623815fd7dfe9797ddcfd5d15ef06ffe7d177359" 2025-09-07T07:36:15.7302601Z }, 2025-09-07T07:36:15.7302737Z { 2025-09-07T07:36:15.7302952Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7303207Z "size": 223, 2025-09-07T07:36:15.7303471Z "digest": "sha256:6e726ef07b5d5cfe2fb9f06d43fc931fc64c381fd37eaf0c169e0dd84796f152" 2025-09-07T07:36:15.7303768Z }, 2025-09-07T07:36:15.7303905Z { 2025-09-07T07:36:15.7304120Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7304370Z "size": 274477524, 2025-09-07T07:36:15.7304637Z "digest": "sha256:364070434a64fa913f3907ada910a4051707e693e0e6124f57bc97aa57791da1" 2025-09-07T07:36:15.7304915Z }, 2025-09-07T07:36:15.7305050Z { 2025-09-07T07:36:15.7305254Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7305516Z "size": 6451569004, 2025-09-07T07:36:15.7305842Z "digest": "sha256:71f708151a84685fc366b85e914dac9f5279313eff07358d79ecaaeecb0f1c42" 2025-09-07T07:36:15.7306125Z }, 2025-09-07T07:36:15.7306255Z { 2025-09-07T07:36:15.7306472Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7306729Z "size": 129, 2025-09-07T07:36:15.7306995Z "digest": "sha256:622d8cfb39ea4dda608d2819c6a9de45df81b6f8319ee8ab4a24c36d81b9a132" 2025-09-07T07:36:15.7307276Z }, 2025-09-07T07:36:15.7307420Z { 2025-09-07T07:36:15.7307647Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7307921Z "size": 778, 2025-09-07T07:36:15.7308186Z "digest": "sha256:284119a92cb13dacff06926444aab4f99756039acb48abba7b75d35c367ed3f1" 2025-09-07T07:36:15.7308489Z }, 2025-09-07T07:36:15.7308645Z { 2025-09-07T07:36:15.7308856Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7309103Z "size": 724, 2025-09-07T07:36:15.7309425Z "digest": "sha256:da63046995a2e510b7146776371a14bff4b31002cc3ef0322e45a3932fba2031" 2025-09-07T07:36:15.7309716Z }, 2025-09-07T07:36:15.7309850Z { 2025-09-07T07:36:15.7310055Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7310313Z "size": 140, 2025-09-07T07:36:15.7310570Z "digest": "sha256:96695940d842555623cfe4fb7b52e949423e8c8f383e55d02363e7e5c5804afa" 2025-09-07T07:36:15.7310856Z }, 2025-09-07T07:36:15.7310987Z { 2025-09-07T07:36:15.7311203Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7311460Z "size": 32, 2025-09-07T07:36:15.7311735Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-09-07T07:36:15.7312014Z }, 2025-09-07T07:36:15.7312147Z { 2025-09-07T07:36:15.7312354Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7312606Z "size": 160, 2025-09-07T07:36:15.7312853Z "digest": "sha256:7ddca6c4c050460204097ba875dc0fa03eca6265122a18c0b8dc5504152aea53" 2025-09-07T07:36:15.7313137Z }, 2025-09-07T07:36:15.7313265Z { 2025-09-07T07:36:15.7313476Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7313721Z "size": 1012, 2025-09-07T07:36:15.7313992Z "digest": "sha256:a95e1f2f1aadef03514a7cdbdac1fe83d4eebedbb80df9be868a223f27e1c263" 2025-09-07T07:36:15.7314289Z }, 2025-09-07T07:36:15.7314422Z { 2025-09-07T07:36:15.7314623Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7314886Z "size": 724, 2025-09-07T07:36:15.7315141Z "digest": 
"sha256:da63046995a2e510b7146776371a14bff4b31002cc3ef0322e45a3932fba2031" 2025-09-07T07:36:15.7315445Z }, 2025-09-07T07:36:15.7315579Z { 2025-09-07T07:36:15.7315804Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7316076Z "size": 135, 2025-09-07T07:36:15.7316353Z "digest": "sha256:8085756b0cc0f9588f23a73c27840a5dff48cc18c3a2f0311e4d1ef291855679" 2025-09-07T07:36:15.7316655Z }, 2025-09-07T07:36:15.7316779Z { 2025-09-07T07:36:15.7316988Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7317248Z "size": 32, 2025-09-07T07:36:15.7317503Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-09-07T07:36:15.7317772Z }, 2025-09-07T07:36:15.7317902Z { 2025-09-07T07:36:15.7318104Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7318348Z "size": 158, 2025-09-07T07:36:15.7318590Z "digest": "sha256:7e9ff0c6f103b18756f01c60b4d57a951660f17bffb1810b330e3ff703caf216" 2025-09-07T07:36:15.7318869Z }, 2025-09-07T07:36:15.7318999Z { 2025-09-07T07:36:15.7319200Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7319438Z "size": 1369, 2025-09-07T07:36:15.7319698Z "digest": "sha256:a625cbbc05b983aeb4c28702a4a5b65c68191ab1b8d17978f7d98cc17ddf3c52" 2025-09-07T07:36:15.7319976Z }, 2025-09-07T07:36:15.7320153Z { 2025-09-07T07:36:15.7320356Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7320607Z "size": 32, 2025-09-07T07:36:15.7320865Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-09-07T07:36:15.7321149Z }, 2025-09-07T07:36:15.7321274Z { 2025-09-07T07:36:15.7321480Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7321736Z "size": 136, 2025-09-07T07:36:15.7321991Z "digest": "sha256:4e28486424310870c8d6815524440f17c6e0afe7572eaa173a811b98b4920bed" 2025-09-07T07:36:15.7322272Z }, 2025-09-07T07:36:15.7322408Z { 2025-09-07T07:36:15.7322620Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7322878Z "size": 380, 2025-09-07T07:36:15.7323134Z "digest": "sha256:5e944f1ed1bef9442f5b1b86225d3958ea8f2f7f4c6aa7b92dc5d0c810c260bc" 2025-09-07T07:36:15.7323424Z }, 2025-09-07T07:36:15.7323560Z { 2025-09-07T07:36:15.7323814Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7324068Z "size": 32, 2025-09-07T07:36:15.7324336Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-09-07T07:36:15.7324626Z }, 2025-09-07T07:36:15.7324760Z { 2025-09-07T07:36:15.7324963Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7325220Z "size": 104, 2025-09-07T07:36:15.7325481Z "digest": "sha256:41619248f604c60e038a02bfd462af96ee2996b77be5f59f05e9ac5fe4790e5a" 2025-09-07T07:36:15.7325769Z }, 2025-09-07T07:36:15.7325896Z { 2025-09-07T07:36:15.7326107Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7326364Z "size": 407, 2025-09-07T07:36:15.7326632Z "digest": "sha256:be86f8c4f654b9ae64a20eb7f960e6ce4baa5b46e0a1f5e1312b11492a40bcd4" 2025-09-07T07:36:15.7327034Z }, 2025-09-07T07:36:15.7327183Z { 2025-09-07T07:36:15.7327422Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7327720Z "size": 32, 2025-09-07T07:36:15.7328020Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-09-07T07:36:15.7328328Z }, 
2025-09-07T07:36:15.7328473Z { 2025-09-07T07:36:15.7328702Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7328956Z "size": 109, 2025-09-07T07:36:15.7329227Z "digest": "sha256:ef1340e22a4bc8cf42e1d40961cb32d183cd3da8f0b785b5425c32ee067690c1" 2025-09-07T07:36:15.7329524Z }, 2025-09-07T07:36:15.7329658Z { 2025-09-07T07:36:15.7329862Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7330122Z "size": 1897, 2025-09-07T07:36:15.7330391Z "digest": "sha256:da8d8b696333cbf6b9f339ab859639c905d6752d7e65fea14c23c3c2dcba553e" 2025-09-07T07:36:15.7330681Z }, 2025-09-07T07:36:15.7330808Z { 2025-09-07T07:36:15.7331020Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7331281Z "size": 243443118, 2025-09-07T07:36:15.7331557Z "digest": "sha256:386b0c49c4982a821fb6f427fbc7d9c7d2012e97c96a514a9c7a09304e76b935" 2025-09-07T07:36:15.7331841Z }, 2025-09-07T07:36:15.7331986Z { 2025-09-07T07:36:15.7332214Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7332490Z "size": 106, 2025-09-07T07:36:15.7332758Z "digest": "sha256:2b1d0ea7efe0bf86e86df804d2cddbf83b113fdecd03f3ddfca728da30546f34" 2025-09-07T07:36:15.7333060Z }, 2025-09-07T07:36:15.7333196Z { 2025-09-07T07:36:15.7333406Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7333654Z "size": 163, 2025-09-07T07:36:15.7333916Z "digest": "sha256:04c04be7408f20625b1bd8454e5a08c91fcf04d4f79ab3ec1b75ae6b1824174d" 2025-09-07T07:36:15.7334206Z }, 2025-09-07T07:36:15.7334382Z { 2025-09-07T07:36:15.7334596Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7334859Z "size": 7943, 2025-09-07T07:36:15.7335145Z "digest": "sha256:f8690caa3ac5e845f2dcc25ad12815b5c7452285c3838a87c780bd03ecf072a3" 2025-09-07T07:36:15.7335505Z }, 2025-09-07T07:36:15.7335639Z { 2025-09-07T07:36:15.7335866Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7336146Z "size": 8074, 2025-09-07T07:36:15.7336428Z "digest": "sha256:2908d6baaa6b21331dee5f210472cae0874d22b98b0a35420cad4fd753ed215f" 2025-09-07T07:36:15.7336737Z }, 2025-09-07T07:36:15.7336884Z { 2025-09-07T07:36:15.7337118Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7337409Z "size": 303, 2025-09-07T07:36:15.7337661Z "digest": "sha256:37e2336101eba2c73995d34431e4fae8782d9e9700c42621777922490b2158ed" 2025-09-07T07:36:15.7337944Z }, 2025-09-07T07:36:15.7338080Z { 2025-09-07T07:36:15.7338295Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7338545Z "size": 32, 2025-09-07T07:36:15.7338858Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-09-07T07:36:15.7339156Z }, 2025-09-07T07:36:15.7339294Z { 2025-09-07T07:36:15.7339502Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7339758Z "size": 108, 2025-09-07T07:36:15.7340012Z "digest": "sha256:f1ac881fde33994861be4324231269058643168b9aee60c699552d0d92d965da" 2025-09-07T07:36:15.7340296Z }, 2025-09-07T07:36:15.7340425Z { 2025-09-07T07:36:15.7340637Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7340896Z "size": 54145699, 2025-09-07T07:36:15.7341164Z "digest": "sha256:43b14c67347e2813c5f63e928c14db60dbb35c330ccc865510cf79739d8b78a1" 2025-09-07T07:36:15.7341443Z }, 2025-09-07T07:36:15.7341578Z { 2025-09-07T07:36:15.7341791Z "mediaType": 
"application/vnd.docker.image.rootfs.diff.tar.gzip", 2025-09-07T07:36:15.7342049Z "size": 32, 2025-09-07T07:36:15.7342311Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2025-09-07T07:36:15.7342607Z } 2025-09-07T07:36:15.7342743Z ] 2025-09-07T07:36:15.7342883Z } 2025-09-07T07:36:15.7343047Z + exit 0 2025-09-07T07:36:15.7364205Z ##[group]Run set -eux 2025-09-07T07:36:15.7364431Z set -eux 2025-09-07T07:36:15.7364726Z # It's ok if this steps fails, it would then be an anonymous user like what we used to have 2025-09-07T07:36:15.7365455Z aws secretsmanager get-secret-value --secret-id docker_hub_readonly_token | jq --raw-output '.SecretString' | jq -r .docker_hub_readonly_token | docker login --username pytorchbot --password-stdin || true 2025-09-07T07:36:15.7372359Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:36:15.7372601Z env: 2025-09-07T07:36:15.7372766Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:15.7372952Z ##[endgroup] 2025-09-07T07:36:15.7400984Z + aws secretsmanager get-secret-value --secret-id docker_hub_readonly_token 2025-09-07T07:36:15.7401377Z + jq --raw-output .SecretString 2025-09-07T07:36:15.7401634Z + jq -r .docker_hub_readonly_token 2025-09-07T07:36:15.7401929Z + docker login --username pytorchbot --password-stdin 2025-09-07T07:36:16.1968721Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-09-07T07:36:16.1969083Z Login Succeeded 2025-09-07T07:36:16.1969319Z Configure a credential helper to remove this warning. See 2025-09-07T07:36:16.1969686Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2025-09-07T07:36:16.1969923Z 2025-09-07T07:36:16.2041200Z ##[group]Run tag=${ECR_DOCKER_IMAGE##*:} 2025-09-07T07:36:16.2041479Z tag=${ECR_DOCKER_IMAGE##*:} 2025-09-07T07:36:16.2041743Z echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}" 2025-09-07T07:36:16.2046904Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:36:16.2047212Z env: 2025-09-07T07:36:16.2047393Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:16.2047996Z ECR_DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:16.2048690Z ##[endgroup] 2025-09-07T07:36:16.2073021Z docker pull ghcr.io/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:16.2107752Z ##[group]Run pytorch/test-infra/.github/actions/pull-docker-image@main 2025-09-07T07:36:16.2108068Z with: 2025-09-07T07:36:16.2108618Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:16.2109254Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-07T07:36:16.2109522Z env: 2025-09-07T07:36:16.2109698Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:16.2109894Z ##[endgroup] 2025-09-07T07:36:16.2225386Z ##[group]Run set -x 2025-09-07T07:36:16.2225612Z set -x 2025-09-07T07:36:16.2225779Z set +e 2025-09-07T07:36:16.2225939Z  2025-09-07T07:36:16.2226108Z login() { 2025-09-07T07:36:16.2226431Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2025-09-07T07:36:16.2226768Z } 2025-09-07T07:36:16.2226922Z  2025-09-07T07:36:16.2227115Z retry () { 2025-09-07T07:36:16.2227301Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2025-09-07T07:36:16.2227497Z } 
2025-09-07T07:36:16.2227637Z  2025-09-07T07:36:16.2227794Z retry login "${DOCKER_REGISTRY}" 2025-09-07T07:36:16.2227984Z  2025-09-07T07:36:16.2228270Z IMAGE_SIZE=$(docker manifest inspect "${DOCKER_IMAGE}" | jq '[.layers[].size, .config.size] | add / 1024 / 1024') 2025-09-07T07:36:16.2228690Z echo "Compressed size of image in MB: ${IMAGE_SIZE}" 2025-09-07T07:36:16.2228950Z  2025-09-07T07:36:16.2229096Z set -e 2025-09-07T07:36:16.2229319Z # ignore output since only exit code is used for conditional 2025-09-07T07:36:16.2229632Z # only pull docker image if it's not available locally 2025-09-07T07:36:16.2229981Z if ! docker inspect --type=image "${DOCKER_IMAGE}" >/dev/null 2>/dev/null; then 2025-09-07T07:36:16.2230312Z  retry docker pull "${DOCKER_IMAGE}" 2025-09-07T07:36:16.2230534Z fi 2025-09-07T07:36:16.2234926Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:36:16.2235167Z env: 2025-09-07T07:36:16.2235330Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:36:16.2235866Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:36:16.2236447Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-07T07:36:16.2236851Z ##[endgroup] 2025-09-07T07:36:16.2262086Z + set +e 2025-09-07T07:36:16.2266909Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-07T07:36:16.2272024Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-07T07:36:16.2276345Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2025-09-07T07:36:16.2279003Z + aws ecr get-login-password --region us-east-1 2025-09-07T07:36:16.6518012Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2025-09-07T07:36:16.6518372Z Login Succeeded 2025-09-07T07:36:16.6518791Z Configure a credential helper to remove this warning. 
See 2025-09-07T07:36:16.6519936Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store
2025-09-07T07:36:16.6520353Z 
2025-09-07T07:36:16.6539259Z ++ docker manifest inspect 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77
2025-09-07T07:36:16.6539933Z ++ jq '[.layers[].size, .config.size] | add / 1024 / 1024'
2025-09-07T07:36:16.8460208Z + IMAGE_SIZE=28579.020259857178
2025-09-07T07:36:16.8460572Z Compressed size of image in MB: 28579.020259857178
2025-09-07T07:36:16.8461491Z + echo 'Compressed size of image in MB: 28579.020259857178'
2025-09-07T07:36:16.8461877Z + set -e
2025-09-07T07:36:16.8462676Z + docker inspect --type=image 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77
2025-09-07T07:36:16.8616078Z + retry docker pull 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77
2025-09-07T07:36:16.8617197Z + docker pull 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77
2025-09-07T07:36:17.1037502Z pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77: Pulling from pytorch/ci-image
[per-layer pull progress elided: each of the image's ~70 layers reports "Pulling fs layer", "Waiting", "Verifying Checksum", "Download complete", and "Pull complete" between 07:36:17 and 07:46:25; the last few layer completions and the image digest follow]
284119a92cb1: Pull complete 2025-09-07T07:46:03.7969336Z 96695940d842: Pull complete 2025-09-07T07:46:04.3052060Z 7ddca6c4c050: Pull complete 2025-09-07T07:46:04.5209573Z a95e1f2f1aad: Pull complete 2025-09-07T07:46:05.0201195Z 8085756b0cc0: Pull complete 2025-09-07T07:46:05.7525999Z 7e9ff0c6f103: Pull complete 2025-09-07T07:46:06.2552551Z a625cbbc05b9: Pull complete 2025-09-07T07:46:07.1885528Z 4e2848642431: Pull complete 2025-09-07T07:46:07.6894918Z 5e944f1ed1be: Pull complete 2025-09-07T07:46:08.5370784Z 41619248f604: Pull complete 2025-09-07T07:46:09.0251222Z be86f8c4f654: Pull complete 2025-09-07T07:46:09.9744447Z ef1340e22a4b: Pull complete 2025-09-07T07:46:10.1235412Z da8d8b696333: Pull complete 2025-09-07T07:46:19.6987505Z 386b0c49c498: Pull complete 2025-09-07T07:46:20.0947002Z 2b1d0ea7efe0: Pull complete 2025-09-07T07:46:20.5676702Z 04c04be7408f: Pull complete 2025-09-07T07:46:21.0569529Z f8690caa3ac5: Pull complete 2025-09-07T07:46:21.3854036Z 2908d6baaa6b: Pull complete 2025-09-07T07:46:21.7989390Z 37e2336101eb: Pull complete 2025-09-07T07:46:22.7420062Z f1ac881fde33: Pull complete 2025-09-07T07:46:25.4956329Z 43b14c67347e: Pull complete 2025-09-07T07:46:26.0588159Z Digest: sha256:383efb45082f20b8c808cb0ba4df693a01359592233f641f1f486911ac320a9a 2025-09-07T07:46:26.1412578Z Status: Downloaded newer image for 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:46:26.1568387Z 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:46:26.1650511Z ##[group]Run echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-09-07T07:46:26.1651096Z echo "IN_CONTAINER_RUNNER=$(if [ -f /.inarc ] || [ -f /.incontainer ]; then echo true ; else echo false; fi)" >> "$GITHUB_OUTPUT" 2025-09-07T07:46:26.1659305Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:46:26.1659554Z env: 2025-09-07T07:46:26.1659720Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:26.1659903Z ##[endgroup] 2025-09-07T07:46:26.1734345Z Prepare all required actions 2025-09-07T07:46:26.1770899Z ##[group]Run ./.github/actions/get-workflow-job-id 2025-09-07T07:46:26.1771146Z with: 2025-09-07T07:46:26.1771731Z github-token: *** 2025-09-07T07:46:26.1771896Z env: 2025-09-07T07:46:26.1772066Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:26.1772257Z ##[endgroup] 2025-09-07T07:46:26.1840950Z ##[group]Run set -eux 2025-09-07T07:46:26.1841150Z set -eux 2025-09-07T07:46:26.1841456Z python3 .github/scripts/get_workflow_job_id.py "${GITHUB_RUN_ID}" "${RUNNER_NAME}" 2025-09-07T07:46:26.1846452Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:46:26.1846710Z env: 2025-09-07T07:46:26.1846949Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:26.1847325Z GITHUB_TOKEN: *** 2025-09-07T07:46:26.1847503Z ##[endgroup] 2025-09-07T07:46:26.1872113Z + python3 .github/scripts/get_workflow_job_id.py 17525270809 i-06b49f47ba3e131d7 2025-09-07T07:46:26.6014768Z Setting output job-id=49775559413 2025-09-07T07:46:26.6015666Z Setting output job-name=nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-09-07T07:46:26.6123953Z ##[group]Run python3 -m pip install psutil==5.9.8 dataclasses_json==0.6.7 nvidia-ml-py==11.525.84 2025-09-07T07:46:26.6124414Z python3 
-m pip install psutil==5.9.8 dataclasses_json==0.6.7 nvidia-ml-py==11.525.84 2025-09-07T07:46:26.6124971Z python3 -m tools.stats.monitor --log-interval "$MONITOR_LOG_INTERVAL" --data-collect-interval "$MONITOR_DATA_COLLECT_INTERVAL" > usage_log.txt 2>&1 & 2025-09-07T07:46:26.6125467Z echo "monitor-script-pid=${!}" >> "${GITHUB_OUTPUT}" 2025-09-07T07:46:26.6130132Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:46:26.6130370Z env: 2025-09-07T07:46:26.6130531Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:26.6130713Z JOB_ID: 49775559413 2025-09-07T07:46:26.6131088Z JOB_NAME: nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-09-07T07:46:26.6131510Z WORKFLOW_NAME: inductor-nightly 2025-09-07T07:46:26.6131741Z WORKFLOW_RUN_ID: 17525270809 2025-09-07T07:46:26.6131921Z MONITOR_LOG_INTERVAL: 5 2025-09-07T07:46:26.6132106Z MONITOR_DATA_COLLECT_INTERVAL: 1 2025-09-07T07:46:26.6132299Z ##[endgroup] 2025-09-07T07:46:27.2859228Z Defaulting to user installation because normal site-packages is not writeable 2025-09-07T07:46:27.6087869Z Collecting psutil==5.9.8 2025-09-07T07:46:27.6402658Z Downloading psutil-5.9.8-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (288 kB) 2025-09-07T07:46:27.7199202Z Collecting dataclasses_json==0.6.7 2025-09-07T07:46:27.7285469Z Downloading dataclasses_json-0.6.7-py3-none-any.whl (28 kB) 2025-09-07T07:46:27.7602325Z Collecting nvidia-ml-py==11.525.84 2025-09-07T07:46:27.7693560Z Downloading nvidia_ml_py-11.525.84-py3-none-any.whl (34 kB) 2025-09-07T07:46:27.8598376Z Collecting marshmallow<4.0.0,>=3.18.0 2025-09-07T07:46:27.8683558Z Downloading marshmallow-3.26.1-py3-none-any.whl (50 kB) 2025-09-07T07:46:27.8968638Z Collecting typing-inspect<1,>=0.4.0 2025-09-07T07:46:27.9055502Z Downloading typing_inspect-0.9.0-py3-none-any.whl (8.8 kB) 2025-09-07T07:46:27.9552426Z Collecting packaging>=17.0 2025-09-07T07:46:27.9637556Z Downloading packaging-25.0-py3-none-any.whl (66 kB) 2025-09-07T07:46:28.0108466Z Collecting typing-extensions>=3.7.4 2025-09-07T07:46:28.0193988Z Downloading typing_extensions-4.15.0-py3-none-any.whl (44 kB) 2025-09-07T07:46:28.0459815Z Collecting mypy-extensions>=0.3.0 2025-09-07T07:46:28.0544274Z Downloading mypy_extensions-1.1.0-py3-none-any.whl (5.0 kB) 2025-09-07T07:46:28.1314731Z Installing collected packages: typing-extensions, packaging, mypy-extensions, typing-inspect, marshmallow, psutil, nvidia-ml-py, dataclasses-json 2025-09-07T07:46:28.3764435Z Successfully installed dataclasses-json-0.6.7 marshmallow-3.26.1 mypy-extensions-1.1.0 nvidia-ml-py-11.525.84 packaging-25.0 psutil-5.9.8 typing-extensions-4.15.0 typing-inspect-0.9.0 2025-09-07T07:46:28.5193230Z Prepare all required actions 2025-09-07T07:46:28.5193545Z Getting action download info 2025-09-07T07:46:28.6621182Z Download action repository 'seemethere/download-artifact-s3@v4' (SHA:1da556a7aa0a088e3153970611f6c432d58e80e6) 2025-09-07T07:46:28.9744808Z Download action repository 'actions/download-artifact@v4' (SHA:d3f86a106a0bac45b974a628896c90dbdf5c8093) 2025-09-07T07:46:29.2493184Z ##[group]Run ./.github/actions/download-build-artifacts 2025-09-07T07:46:29.2493449Z with: 2025-09-07T07:46:29.2493641Z name: linux-jammy-py3.9-gcc11-build 2025-09-07T07:46:29.2493860Z s3-bucket: gha-artifacts 2025-09-07T07:46:29.2494054Z env: 2025-09-07T07:46:29.2494219Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:29.2494409Z ##[endgroup] 
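The usage-monitoring step a few entries above installs psutil, dataclasses_json, and nvidia-ml-py, launches tools.stats.monitor in the background with its output redirected to usage_log.txt, and records the background PID as a step output so a later step can stop the monitor once testing finishes. A hedged sketch of that pattern follows; the fallback interval values and the "monitor" step id mentioned in the final comment are illustrative assumptions, not taken from this log.

    # Sketch: start a resource monitor in the background inside a GitHub Actions
    # bash step and expose its PID as a step output for later cleanup.
    python3 -m pip install psutil==5.9.8 dataclasses_json==0.6.7 nvidia-ml-py==11.525.84

    python3 -m tools.stats.monitor \
      --log-interval "${MONITOR_LOG_INTERVAL:-5}" \
      --data-collect-interval "${MONITOR_DATA_COLLECT_INTERVAL:-1}" \
      > usage_log.txt 2>&1 &

    # $! holds the PID of the most recently backgrounded process; appending it to
    # GITHUB_OUTPUT publishes it as an output of this step.
    echo "monitor-script-pid=${!}" >> "${GITHUB_OUTPUT}"

    # A later cleanup step could then terminate the monitor, e.g.
    #   kill "${{ steps.monitor.outputs.monitor-script-pid }}"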
2025-09-07T07:46:29.2513940Z ##[group]Run seemethere/download-artifact-s3@v4 2025-09-07T07:46:29.2514177Z with: 2025-09-07T07:46:29.2514357Z name: linux-jammy-py3.9-gcc11-build 2025-09-07T07:46:29.2514694Z s3-bucket: gha-artifacts 2025-09-07T07:46:29.2514920Z region: us-east-1 2025-09-07T07:46:29.2515081Z env: 2025-09-07T07:46:29.2515248Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:29.2515424Z ##[endgroup] 2025-09-07T07:46:29.6485372Z (node:48477) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023. 2025-09-07T07:46:29.6485740Z 2025-09-07T07:46:29.6485887Z Please migrate your code to use AWS SDK for JavaScript (v3). 2025-09-07T07:46:29.6486282Z For more information, check the migration guide at https://a.co/7PzMCcy 2025-09-07T07:46:29.6486715Z (Use `node --trace-warnings ...` to show where the warning was created) 2025-09-07T07:46:29.9751585Z Found 1 objects with prefix pytorch/pytorch/17525270809/linux-jammy-py3.9-gcc11-build/ 2025-09-07T07:46:29.9752286Z Starting download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip 2025-09-07T07:46:36.3133675Z Finished download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip 2025-09-07T07:46:36.3139546Z Artifact download has finished successfully 2025-09-07T07:46:36.3334590Z ##[group]Run unzip -o artifacts.zip 2025-09-07T07:46:36.3334841Z unzip -o artifacts.zip 2025-09-07T07:46:36.3339424Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:46:36.3339666Z env: 2025-09-07T07:46:36.3339826Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:36.3340014Z ##[endgroup] 2025-09-07T07:46:36.3412871Z Archive: artifacts.zip 2025-09-07T07:46:36.3415873Z creating: dist/ 2025-09-07T07:46:37.4012786Z inflating: dist/torch-2.9.0a0+git93fb23d-cp39-cp39-linux_x86_64.whl 2025-09-07T07:46:37.4013136Z creating: dist/vision/ 2025-09-07T07:46:37.4087498Z inflating: dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-09-07T07:46:37.4088060Z creating: dist/audio/ 2025-09-07T07:46:37.4122655Z inflating: dist/audio/torchaudio-2.8.0a0+2e30055-cp39-cp39-linux_x86_64.whl 2025-09-07T07:46:37.4123145Z creating: dist/ao/ 2025-09-07T07:46:37.4154816Z inflating: dist/ao/torchao-0.7.0+git51c87b6e-py3-none-any.whl 2025-09-07T07:46:37.4264534Z inflating: dist/.ninja_log 2025-09-07T07:46:37.4264990Z creating: build/custom_test_artifacts/ 2025-09-07T07:46:37.4265405Z creating: build/custom_test_artifacts/custom-op-build/ 2025-09-07T07:46:37.4265871Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/ 2025-09-07T07:46:37.4266252Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/pkgRedirects/ 2025-09-07T07:46:37.4266675Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeConfigureLog.yaml 2025-09-07T07:46:37.4267088Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/ 2025-09-07T07:46:37.4267460Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeSystem.cmake 2025-09-07T07:46:37.4267873Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdC/ 2025-09-07T07:46:37.4268679Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdC/tmp/ 2025-09-07T07:46:37.4269167Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdC/CMakeCCompilerId.c 2025-09-07T07:46:37.4269934Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdC/a.out 2025-09-07T07:46:37.4270502Z 
inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeCCompiler.cmake 2025-09-07T07:46:37.4270931Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdCXX/ 2025-09-07T07:46:37.4271337Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdCXX/tmp/ 2025-09-07T07:46:37.4276007Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-09-07T07:46:37.4276541Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CompilerIdCXX/a.out 2025-09-07T07:46:37.4277007Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeCXXCompiler.cmake 2025-09-07T07:46:37.4277709Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_C.bin 2025-09-07T07:46:37.4278221Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_CXX.bin 2025-09-07T07:46:37.4278664Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeScratch/ 2025-09-07T07:46:37.4279056Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/cmake.check_cache 2025-09-07T07:46:37.4279442Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/ 2025-09-07T07:46:37.4279852Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.ts 2025-09-07T07:46:37.4280323Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.make 2025-09-07T07:46:37.4280781Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/depend.make 2025-09-07T07:46:37.4281215Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/link.txt 2025-09-07T07:46:37.4281687Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/cmake_clean.cmake 2025-09-07T07:46:37.4282128Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/build.make 2025-09-07T07:46:37.4282582Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/DependInfo.cmake 2025-09-07T07:46:37.4283038Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/flags.make 2025-09-07T07:46:37.4283472Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/progress.make 2025-09-07T07:46:37.4300669Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o.d 2025-09-07T07:46:37.4470201Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o 2025-09-07T07:46:37.4470747Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/ 2025-09-07T07:46:37.4471232Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.ts 2025-09-07T07:46:37.4471757Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.make 2025-09-07T07:46:37.4472246Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/depend.make 2025-09-07T07:46:37.4472680Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/link.txt 2025-09-07T07:46:37.4473123Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/cmake_clean.cmake 2025-09-07T07:46:37.4473575Z inflating: 
build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/build.make 2025-09-07T07:46:37.4474278Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/DependInfo.cmake 2025-09-07T07:46:37.4474740Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/flags.make 2025-09-07T07:46:37.4475203Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/progress.make 2025-09-07T07:46:37.4490440Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o.d 2025-09-07T07:46:37.4559091Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o 2025-09-07T07:46:37.4562105Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-09-07T07:46:37.4562847Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/TargetDirectories.txt 2025-09-07T07:46:37.4563469Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/progress.marks 2025-09-07T07:46:37.4564189Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile2 2025-09-07T07:46:37.4564586Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile.cmake 2025-09-07T07:46:37.4565024Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/InstallScripts.json 2025-09-07T07:46:37.4565453Z inflating: build/custom_test_artifacts/custom-op-build/CMakeCache.txt 2025-09-07T07:46:37.4565825Z inflating: build/custom_test_artifacts/custom-op-build/Makefile 2025-09-07T07:46:37.4566196Z inflating: build/custom_test_artifacts/custom-op-build/cmake_install.cmake 2025-09-07T07:46:37.4711374Z inflating: build/custom_test_artifacts/custom-op-build/libcustom_ops.so 2025-09-07T07:46:37.4761307Z inflating: build/custom_test_artifacts/custom-op-build/test_custom_ops 2025-09-07T07:46:37.4761880Z creating: build/custom_test_artifacts/jit-hook-build/ 2025-09-07T07:46:37.4762348Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/ 2025-09-07T07:46:37.4763254Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/pkgRedirects/ 2025-09-07T07:46:37.4763727Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeConfigureLog.yaml 2025-09-07T07:46:37.4764133Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/ 2025-09-07T07:46:37.4764533Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeSystem.cmake 2025-09-07T07:46:37.4764958Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdC/ 2025-09-07T07:46:37.4765372Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdC/tmp/ 2025-09-07T07:46:37.4765845Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdC/CMakeCCompilerId.c 2025-09-07T07:46:37.4766417Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdC/a.out 2025-09-07T07:46:37.4767135Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeCCompiler.cmake 2025-09-07T07:46:37.4767612Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdCXX/ 2025-09-07T07:46:37.4768088Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdCXX/tmp/ 2025-09-07T07:46:37.4768756Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-09-07T07:46:37.4769334Z 
inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CompilerIdCXX/a.out 2025-09-07T07:46:37.4772436Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeCXXCompiler.cmake 2025-09-07T07:46:37.4773139Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_C.bin 2025-09-07T07:46:37.4773806Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_CXX.bin 2025-09-07T07:46:37.4774275Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeScratch/ 2025-09-07T07:46:37.4779633Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/cmake.check_cache 2025-09-07T07:46:37.4784150Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/ 2025-09-07T07:46:37.4787955Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.ts 2025-09-07T07:46:37.4790165Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.make 2025-09-07T07:46:37.4793898Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/depend.make 2025-09-07T07:46:37.4799112Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/link.txt 2025-09-07T07:46:37.4799818Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/cmake_clean.cmake 2025-09-07T07:46:37.4800546Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/build.make 2025-09-07T07:46:37.4805479Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/DependInfo.cmake 2025-09-07T07:46:37.4806140Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/flags.make 2025-09-07T07:46:37.4806661Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/progress.make 2025-09-07T07:46:37.4807459Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o.d 2025-09-07T07:46:37.4852877Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o 2025-09-07T07:46:37.4853682Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-09-07T07:46:37.4854388Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/TargetDirectories.txt 2025-09-07T07:46:37.4855438Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/progress.marks 2025-09-07T07:46:37.4855898Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile2 2025-09-07T07:46:37.4856304Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile.cmake 2025-09-07T07:46:37.4856745Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/InstallScripts.json 2025-09-07T07:46:37.4857172Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeCache.txt 2025-09-07T07:46:37.4857536Z inflating: build/custom_test_artifacts/jit-hook-build/Makefile 2025-09-07T07:46:37.4857896Z inflating: build/custom_test_artifacts/jit-hook-build/cmake_install.cmake 2025-09-07T07:46:37.4890720Z inflating: build/custom_test_artifacts/jit-hook-build/test_jit_hooks 2025-09-07T07:46:37.4891276Z creating: build/custom_test_artifacts/custom-backend-build/ 2025-09-07T07:46:37.4891764Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/ 2025-09-07T07:46:37.4892565Z creating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/pkgRedirects/ 2025-09-07T07:46:37.4893177Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeConfigureLog.yaml 2025-09-07T07:46:37.4893666Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/ 2025-09-07T07:46:37.4894178Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeSystem.cmake 2025-09-07T07:46:37.4894678Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdC/ 2025-09-07T07:46:37.4895208Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdC/tmp/ 2025-09-07T07:46:37.4895991Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdC/CMakeCCompilerId.c 2025-09-07T07:46:37.4896612Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdC/a.out 2025-09-07T07:46:37.4897399Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeCCompiler.cmake 2025-09-07T07:46:37.4897955Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdCXX/ 2025-09-07T07:46:37.4898485Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdCXX/tmp/ 2025-09-07T07:46:37.4899836Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdCXX/CMakeCXXCompilerId.cpp 2025-09-07T07:46:37.4900456Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CompilerIdCXX/a.out 2025-09-07T07:46:37.4901044Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeCXXCompiler.cmake 2025-09-07T07:46:37.4902223Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_C.bin 2025-09-07T07:46:37.4905749Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/4.0.0/CMakeDetermineCompilerABI_CXX.bin 2025-09-07T07:46:37.4906516Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeScratch/ 2025-09-07T07:46:37.4906939Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/cmake.check_cache 2025-09-07T07:46:37.4907360Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/ 2025-09-07T07:46:37.4907825Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.ts 2025-09-07T07:46:37.4908348Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.make 2025-09-07T07:46:37.4908844Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/depend.make 2025-09-07T07:46:37.4909331Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/link.txt 2025-09-07T07:46:37.4909852Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/cmake_clean.cmake 2025-09-07T07:46:37.4910347Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/build.make 2025-09-07T07:46:37.4910834Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/DependInfo.cmake 2025-09-07T07:46:37.4911324Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/flags.make 2025-09-07T07:46:37.4911793Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/progress.make 
2025-09-07T07:46:37.4912308Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o.d 2025-09-07T07:46:37.5017492Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o 2025-09-07T07:46:37.5022380Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/ 2025-09-07T07:46:37.5028501Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.ts 2025-09-07T07:46:37.5029155Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.make 2025-09-07T07:46:37.5029729Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/depend.make 2025-09-07T07:46:37.5030290Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/link.txt 2025-09-07T07:46:37.5030859Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/cmake_clean.cmake 2025-09-07T07:46:37.5031384Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/build.make 2025-09-07T07:46:37.5031916Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/DependInfo.cmake 2025-09-07T07:46:37.5032655Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/flags.make 2025-09-07T07:46:37.5033197Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/progress.make 2025-09-07T07:46:37.5037385Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o.d 2025-09-07T07:46:37.5084541Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o 2025-09-07T07:46:37.5085303Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeDirectoryInformation.cmake 2025-09-07T07:46:37.5085946Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/TargetDirectories.txt 2025-09-07T07:46:37.5086394Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/progress.marks 2025-09-07T07:46:37.5087044Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile2 2025-09-07T07:46:37.5087690Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile.cmake 2025-09-07T07:46:37.5088118Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/InstallScripts.json 2025-09-07T07:46:37.5088547Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeCache.txt 2025-09-07T07:46:37.5088905Z inflating: build/custom_test_artifacts/custom-backend-build/Makefile 2025-09-07T07:46:37.5089273Z inflating: build/custom_test_artifacts/custom-backend-build/cmake_install.cmake 2025-09-07T07:46:37.5179502Z inflating: build/custom_test_artifacts/custom-backend-build/libcustom_backend.so 2025-09-07T07:46:37.5213473Z inflating: build/custom_test_artifacts/custom-backend-build/test_custom_backend 2025-09-07T07:46:37.5214037Z creating: build/lib/ 2025-09-07T07:46:37.5283261Z inflating: build/lib/libprotobuf-lite.a 2025-09-07T07:46:37.5689595Z inflating: build/lib/libprotobuf.a 2025-09-07T07:46:37.6140201Z inflating: build/lib/libprotoc.a 2025-09-07T07:46:37.6151102Z inflating: build/lib/libpthreadpool.a 2025-09-07T07:46:37.6155945Z inflating: 
build/lib/libcpuinfo.a 2025-09-07T07:46:37.6163337Z inflating: build/lib/libcpuinfo_internals.a 2025-09-07T07:46:37.6163630Z inflating: build/lib/libclog.a 2025-09-07T07:46:37.6178609Z inflating: build/lib/libpytorch_qnnpack.a 2025-09-07T07:46:37.6184232Z inflating: build/lib/libnnpack_reference_layers.a 2025-09-07T07:46:37.6350092Z inflating: build/lib/libmicrokernels-prod.a 2025-09-07T07:46:37.6363904Z inflating: build/lib/libnnpack.a 2025-09-07T07:46:37.7148932Z inflating: build/lib/libmicrokernels-all.a 2025-09-07T07:46:37.7212117Z inflating: build/lib/libgtest.a 2025-09-07T07:46:37.7225154Z inflating: build/lib/libgmock.a 2025-09-07T07:46:37.7225482Z inflating: build/lib/libgtest_main.a 2025-09-07T07:46:37.7307312Z inflating: build/lib/libXNNPACK.a 2025-09-07T07:46:37.7307592Z inflating: build/lib/libgmock_main.a 2025-09-07T07:46:37.7376217Z inflating: build/lib/libbenchmark.a 2025-09-07T07:46:37.7380335Z inflating: build/lib/libbenchmark_main.a 2025-09-07T07:46:37.7380632Z inflating: build/lib/libjitprofiling.a 2025-09-07T07:46:37.7380879Z inflating: build/lib/libittnotify.a 2025-09-07T07:46:37.7442344Z inflating: build/lib/libasmjit.a 2025-09-07T07:46:37.8465709Z inflating: build/lib/libfbgemm.a 2025-09-07T07:46:37.8490249Z inflating: build/lib/libtensorpipe_uv.a 2025-09-07T07:46:37.8976039Z inflating: build/lib/libtensorpipe.a 2025-09-07T07:46:37.9081752Z inflating: build/lib/libgloo.a 2025-09-07T07:46:37.9124763Z inflating: build/lib/libonnx_proto.a 2025-09-07T07:46:37.9758395Z inflating: build/lib/libonnx.a 2025-09-07T07:46:38.8843690Z inflating: build/lib/libdnnl.a 2025-09-07T07:46:38.8863463Z inflating: build/lib/libfmt.a 2025-09-07T07:46:38.9101997Z inflating: build/lib/libkineto.a 2025-09-07T07:46:38.9205036Z inflating: build/lib/libc10.so 2025-09-07T07:46:38.9205822Z inflating: build/lib/libtorch_global_deps.so 2025-09-07T07:46:41.5919712Z inflating: build/lib/libtorch_cpu.so 2025-09-07T07:46:41.5920022Z inflating: build/lib/libtorch.so 2025-09-07T07:46:41.5983870Z inflating: build/lib/libtorchbind_test.so 2025-09-07T07:46:41.5997824Z inflating: build/lib/libjitbackend_test.so 2025-09-07T07:46:41.6019372Z inflating: build/lib/libbackend_with_compiler.so 2025-09-07T07:46:41.6042769Z inflating: build/lib/libaoti_custom_ops.so 2025-09-07T07:46:41.6045681Z inflating: build/lib/libshm.so 2025-09-07T07:46:41.7888283Z inflating: build/lib/libtorch_python.so 2025-09-07T07:46:41.7920059Z inflating: build/lib/libnnapi_backend.so 2025-09-07T07:46:41.7920487Z creating: build/bin/ 2025-09-07T07:46:41.7920792Z creating: build/bin/CMakeFiles/ 2025-09-07T07:46:41.7921132Z inflating: build/bin/cmake_install.cmake 2025-09-07T07:46:41.7921923Z inflating: build/bin/CTestTestfile.cmake 2025-09-07T07:46:41.8331812Z inflating: build/bin/protoc-3.13.0.0 2025-09-07T07:46:41.8742311Z inflating: build/bin/protoc 2025-09-07T07:46:41.8793708Z inflating: build/bin/c10_AllocatorConfig_test 2025-09-07T07:46:41.8844426Z inflating: build/bin/c10_CompileTimeFunctionPointer_test 2025-09-07T07:46:41.8901511Z inflating: build/bin/c10_DeviceGuard_test 2025-09-07T07:46:41.8950539Z inflating: build/bin/c10_Device_test 2025-09-07T07:46:41.9005968Z inflating: build/bin/c10_DispatchKeySet_test 2025-09-07T07:46:41.9056828Z inflating: build/bin/c10_StreamGuard_test 2025-09-07T07:46:41.9109833Z inflating: build/bin/c10_Scalar_test 2025-09-07T07:46:41.9165629Z inflating: build/bin/c10_SymInt_test 2025-09-07T07:46:41.9220645Z inflating: build/bin/c10_InlineDeviceGuard_test 2025-09-07T07:46:41.9277978Z inflating: 
build/bin/c10_InlineStreamGuard_test 2025-09-07T07:46:41.9330576Z inflating: build/bin/c10_SizesAndStrides_test 2025-09-07T07:46:41.9400924Z inflating: build/bin/c10_cow_test 2025-09-07T07:46:41.9453020Z inflating: build/bin/c10_ArrayRef_test 2025-09-07T07:46:41.9499374Z inflating: build/bin/c10_ConstexprCrc_test 2025-09-07T07:46:41.9555682Z inflating: build/bin/c10_Bitset_test 2025-09-07T07:46:41.9603630Z inflating: build/bin/c10_DeadlockDetection_test 2025-09-07T07:46:41.9660897Z inflating: build/bin/c10_Enumerate_test 2025-09-07T07:46:41.9717765Z inflating: build/bin/c10_LeftRight_test 2025-09-07T07:46:41.9767336Z inflating: build/bin/c10_IntrusiveList_test 2025-09-07T07:46:41.9823867Z inflating: build/bin/c10_Metaprogramming_test 2025-09-07T07:46:41.9873577Z inflating: build/bin/c10_Half_test 2025-09-07T07:46:41.9925423Z inflating: build/bin/c10_NetworkFlow_test 2025-09-07T07:46:41.9977175Z inflating: build/bin/c10_Semaphore_test 2025-09-07T07:46:42.0027345Z inflating: build/bin/c10_Synchronized_test 2025-09-07T07:46:42.0079587Z inflating: build/bin/c10_TypeList_test 2025-09-07T07:46:42.0135439Z inflating: build/bin/c10_ThreadLocal_test 2025-09-07T07:46:42.0185874Z inflating: build/bin/c10_TypeIndex_test 2025-09-07T07:46:42.0235308Z inflating: build/bin/c10_TypeTraits_test 2025-09-07T07:46:42.0288814Z inflating: build/bin/c10_accumulate_test 2025-09-07T07:46:42.0345453Z inflating: build/bin/c10_bfloat16_test 2025-09-07T07:46:42.0393844Z inflating: build/bin/c10_bit_cast_test 2025-09-07T07:46:42.0450577Z inflating: build/bin/c10_complex_test 2025-09-07T07:46:42.0505694Z inflating: build/bin/c10_complex_math_test 2025-09-07T07:46:42.0554018Z inflating: build/bin/c10_error_test 2025-09-07T07:46:42.0604983Z inflating: build/bin/c10_exception_test 2025-09-07T07:46:42.0655665Z inflating: build/bin/c10_flags_test 2025-09-07T07:46:42.0705679Z inflating: build/bin/c10_irange_test 2025-09-07T07:46:42.0860785Z inflating: build/bin/c10_intrusive_ptr_test 2025-09-07T07:46:42.0907261Z inflating: build/bin/c10_generic_math_test 2025-09-07T07:46:42.0961885Z inflating: build/bin/c10_lazy_test 2025-09-07T07:46:42.1021421Z inflating: build/bin/c10_logging_test 2025-09-07T07:46:42.1091990Z inflating: build/bin/c10_optional_test 2025-09-07T07:46:42.1146404Z inflating: build/bin/c10_registry_test 2025-09-07T07:46:42.1204933Z inflating: build/bin/c10_ordered_preserving_dict_test 2025-09-07T07:46:42.1258553Z inflating: build/bin/c10_ssize_test 2025-09-07T07:46:42.1400541Z inflating: build/bin/c10_small_vector_test 2025-09-07T07:46:42.1458339Z inflating: build/bin/c10_string_util_test 2025-09-07T07:46:42.1501672Z inflating: build/bin/c10_intrusive_ptr_benchmark 2025-09-07T07:46:42.1553909Z inflating: build/bin/c10_tempfile_test 2025-09-07T07:46:42.1601432Z inflating: build/bin/c10_string_view_test 2025-09-07T07:46:42.1658294Z inflating: build/bin/c10_typeid_test 2025-09-07T07:46:42.2191721Z inflating: build/bin/vec_test_all_types_DEFAULT 2025-09-07T07:46:42.2756123Z inflating: build/bin/vec_test_all_types_AVX512 2025-09-07T07:46:42.3320954Z inflating: build/bin/vec_test_all_types_AVX2 2025-09-07T07:46:42.3376100Z inflating: build/bin/static_runtime_bench 2025-09-07T07:46:42.3613768Z inflating: build/bin/static_runtime_test 2025-09-07T07:46:42.3692458Z inflating: build/bin/Dict_test 2025-09-07T07:46:42.3747893Z inflating: build/bin/Dimname_test 2025-09-07T07:46:42.3811561Z inflating: build/bin/MaybeOwned_test 2025-09-07T07:46:42.3870354Z inflating: build/bin/NamedTensor_test 2025-09-07T07:46:42.3928111Z 
inflating: build/bin/apply_utils_test 2025-09-07T07:46:42.3985104Z inflating: build/bin/atest 2025-09-07T07:46:42.4047571Z inflating: build/bin/basic 2025-09-07T07:46:42.4101411Z inflating: build/bin/broadcast_test 2025-09-07T07:46:42.4151715Z inflating: build/bin/cpu_allocator_test 2025-09-07T07:46:42.4211463Z inflating: build/bin/cpu_generator_test 2025-09-07T07:46:42.4261887Z inflating: build/bin/cpu_profiling_allocator_test 2025-09-07T07:46:42.4350312Z inflating: build/bin/cpu_rng_test 2025-09-07T07:46:42.4401344Z inflating: build/bin/dlconvertor_test 2025-09-07T07:46:42.4459476Z inflating: build/bin/extension_backend_test 2025-09-07T07:46:42.4516059Z inflating: build/bin/half_test 2025-09-07T07:46:42.4603559Z inflating: build/bin/ivalue_test 2025-09-07T07:46:42.4653755Z inflating: build/bin/lazy_tensor_test 2025-09-07T07:46:42.4705820Z inflating: build/bin/math_kernel_test 2025-09-07T07:46:42.4760961Z inflating: build/bin/memory_format_test 2025-09-07T07:46:42.4816863Z inflating: build/bin/memory_overlapping_test 2025-09-07T07:46:42.4874155Z inflating: build/bin/mobile_memory_cleanup 2025-09-07T07:46:42.4925592Z inflating: build/bin/native_test 2025-09-07T07:46:42.4978617Z inflating: build/bin/operator_name_test 2025-09-07T07:46:42.5028500Z inflating: build/bin/operators_test 2025-09-07T07:46:42.5080946Z inflating: build/bin/packedtensoraccessor_test 2025-09-07T07:46:42.5146651Z inflating: build/bin/pow_test 2025-09-07T07:46:42.5200880Z inflating: build/bin/quantized_test 2025-09-07T07:46:42.5255899Z inflating: build/bin/reportMemoryUsage_test 2025-09-07T07:46:42.5303089Z inflating: build/bin/reduce_ops_test 2025-09-07T07:46:42.5359859Z inflating: build/bin/scalar_tensor_test 2025-09-07T07:46:42.5415385Z inflating: build/bin/scalar_test 2025-09-07T07:46:42.5468016Z inflating: build/bin/StorageUtils_test 2025-09-07T07:46:42.5517248Z inflating: build/bin/stride_properties_test 2025-09-07T07:46:42.5590838Z inflating: build/bin/tensor_iterator_test 2025-09-07T07:46:42.5644932Z inflating: build/bin/test_parallel 2025-09-07T07:46:42.5696417Z inflating: build/bin/thread_init_test 2025-09-07T07:46:42.5751248Z inflating: build/bin/type_ptr_test 2025-09-07T07:46:42.5807480Z inflating: build/bin/type_test 2025-09-07T07:46:42.5862235Z inflating: build/bin/undefined_tensor_test 2025-09-07T07:46:42.5910206Z inflating: build/bin/verify_api_visibility 2025-09-07T07:46:42.5982425Z inflating: build/bin/legacy_vmap_test 2025-09-07T07:46:42.6030223Z inflating: build/bin/weakref_test 2025-09-07T07:46:42.6083952Z inflating: build/bin/wrapdim_test 2025-09-07T07:46:42.6137402Z inflating: build/bin/xla_tensor_test 2025-09-07T07:46:42.6199511Z inflating: build/bin/IListRef_test 2025-09-07T07:46:42.6301900Z inflating: build/bin/List_test 2025-09-07T07:46:42.6372165Z inflating: build/bin/KernelFunction_test 2025-09-07T07:46:42.6494888Z inflating: build/bin/kernel_function_legacy_test 2025-09-07T07:46:42.6587097Z inflating: build/bin/kernel_function_test 2025-09-07T07:46:42.6708057Z inflating: build/bin/kernel_lambda_legacy_test 2025-09-07T07:46:42.6803957Z inflating: build/bin/kernel_lambda_test 2025-09-07T07:46:42.6863027Z inflating: build/bin/kernel_stackbased_test 2025-09-07T07:46:42.6954044Z inflating: build/bin/make_boxed_from_unboxed_functor_test 2025-09-07T07:46:42.7005691Z inflating: build/bin/CppSignature_test 2025-09-07T07:46:42.7060195Z inflating: build/bin/backend_fallback_test 2025-09-07T07:46:42.7108303Z inflating: build/bin/op_allowlist_test 2025-09-07T07:46:42.7393139Z inflating: 
build/bin/op_registration_test 2025-09-07T07:46:42.7454552Z inflating: build/bin/inline_container_test 2025-09-07T07:46:42.8454652Z inflating: build/bin/test_jit 2025-09-07T07:46:42.8507891Z inflating: build/bin/FileStoreTest 2025-09-07T07:46:42.8561729Z inflating: build/bin/BackoffTest 2025-09-07T07:46:42.8908574Z inflating: build/bin/test_nativert 2025-09-07T07:46:42.8963798Z inflating: build/bin/TCPStoreTest 2025-09-07T07:46:42.9021831Z inflating: build/bin/HashStoreTest 2025-09-07T07:46:42.9083178Z inflating: build/bin/ProcessGroupGlooTest 2025-09-07T07:46:42.9085246Z inflating: build/bin/example_allreduce 2025-09-07T07:46:42.9142980Z inflating: build/bin/test_dist_autograd 2025-09-07T07:46:42.9209594Z inflating: build/bin/test_cpp_rpc 2025-09-07T07:46:43.0243852Z inflating: build/bin/test_api 2025-09-07T07:46:43.0244704Z inflating: build/bin/parallel_benchmark 2025-09-07T07:46:43.0559797Z inflating: build/bin/test_lazy 2025-09-07T07:46:43.0562426Z inflating: build/bin/torch_shm_manager 2025-09-07T07:46:43.0562673Z creating: .additional_ci_files/ 2025-09-07T07:46:43.0644491Z inflating: .additional_ci_files/test-times.json 2025-09-07T07:46:43.0954439Z inflating: .additional_ci_files/test-class-times.json 2025-09-07T07:46:43.0976002Z ##[group]Run rm artifacts.zip 2025-09-07T07:46:43.0976228Z rm artifacts.zip 2025-09-07T07:46:43.0980697Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:46:43.0980945Z env: 2025-09-07T07:46:43.0981106Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:43.0981289Z ##[endgroup] 2025-09-07T07:46:43.1252754Z ##[group]Run df -H 2025-09-07T07:46:43.1252965Z df -H 2025-09-07T07:46:43.1257681Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:46:43.1257936Z env: 2025-09-07T07:46:43.1258108Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:43.1258285Z ##[endgroup] 2025-09-07T07:46:43.1299813Z Filesystem Size Used Avail Use% Mounted on 2025-09-07T07:46:43.1300336Z devtmpfs 4.2M 0 4.2M 0% /dev 2025-09-07T07:46:43.1303896Z tmpfs 67G 0 67G 0% /dev/shm 2025-09-07T07:46:43.1304276Z tmpfs 27G 791k 27G 1% /run 2025-09-07T07:46:43.1304515Z /dev/nvme0n1p1 215G 70G 145G 33% / 2025-09-07T07:46:43.1304726Z tmpfs 67G 13k 67G 1% /tmp 2025-09-07T07:46:43.1305065Z /dev/nvme0n1p128 11M 1.4M 9.2M 13% /boot/efi 2025-09-07T07:46:43.1328260Z Prepare all required actions 2025-09-07T07:46:43.1329507Z Getting action download info 2025-09-07T07:46:43.2750273Z ##[group]Run ./.github/actions/download-td-artifacts 2025-09-07T07:46:43.2750542Z with: 2025-09-07T07:46:43.2750706Z env: 2025-09-07T07:46:43.2750870Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:43.2751056Z ##[endgroup] 2025-09-07T07:46:43.2775695Z ##[group]Run seemethere/download-artifact-s3@v4 2025-09-07T07:46:43.2775932Z with: 2025-09-07T07:46:43.2776083Z name: td_results 2025-09-07T07:46:43.2776260Z s3-bucket: gha-artifacts 2025-09-07T07:46:43.2776456Z region: us-east-1 2025-09-07T07:46:43.2776607Z env: 2025-09-07T07:46:43.2776761Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:43.2776936Z ##[endgroup] 2025-09-07T07:46:43.6093849Z (node:48498) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023. 2025-09-07T07:46:43.6098270Z 2025-09-07T07:46:43.6098709Z Please migrate your code to use AWS SDK for JavaScript (v3). 
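The two artifact fetches in this part of the log (the build artifact unpacked above and the td_results artifact whose download starts here) both pull from the gha-artifacts S3 bucket under a pytorch/pytorch/<run-id>/<artifact-name>/ prefix, and the optional td_results.json is later moved with "|| true" so its absence does not fail the job. The workflow does this through the Node-based seemethere/download-artifact-s3 action; a rough AWS-CLI equivalent, offered only as an assumption-laden sketch, is below.

    # Rough AWS-CLI equivalent of the two S3 artifact downloads in this job
    # (the workflow actually uses the Node-based seemethere/download-artifact-s3
    # action; the CLI commands here are an assumption, not what the runner executes).
    S3_BUCKET=gha-artifacts
    RUN_ID=17525270809

    # Build artifact: a single artifacts.zip is expected under this prefix.
    aws s3 cp --recursive \
      "s3://${S3_BUCKET}/pytorch/pytorch/${RUN_ID}/linux-jammy-py3.9-gcc11-build/" .
    unzip -o artifacts.zip
    rm artifacts.zip

    # Target-determination results may not exist for this run (0 objects found),
    # so tolerate both a failed copy and a missing file.
    aws s3 cp --recursive \
      "s3://${S3_BUCKET}/pytorch/pytorch/${RUN_ID}/td_results/" . || true
    mkdir -p .additional_ci_files
    mv td_results.json .additional_ci_files/td_results.json || true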
2025-09-07T07:46:43.6099163Z For more information, check the migration guide at https://a.co/7PzMCcy 2025-09-07T07:46:43.6099571Z (Use `node --trace-warnings ...` to show where the warning was created) 2025-09-07T07:46:43.6968355Z Found 0 objects with prefix pytorch/pytorch/17525270809/td_results/ 2025-09-07T07:46:43.6971525Z Artifact download has finished successfully 2025-09-07T07:46:43.7138745Z ##[group]Run mkdir -p .additional_ci_files 2025-09-07T07:46:43.7139005Z mkdir -p .additional_ci_files 2025-09-07T07:46:43.7139285Z mv td_results.json .additional_ci_files/td_results.json || true 2025-09-07T07:46:43.7143744Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:46:43.7143982Z env: 2025-09-07T07:46:43.7144145Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:43.7144330Z ##[endgroup] 2025-09-07T07:46:43.7192223Z mv: cannot stat 'td_results.json': No such file or directory 2025-09-07T07:46:43.7215332Z ##[group]Run .github/scripts/parse_ref.py 2025-09-07T07:46:43.7215585Z .github/scripts/parse_ref.py 2025-09-07T07:46:43.7219196Z shell: /usr/bin/bash -e {0} 2025-09-07T07:46:43.7219380Z env: 2025-09-07T07:46:43.7219536Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:43.7219707Z ##[endgroup] 2025-09-07T07:46:43.7406112Z Setting output branch=main 2025-09-07T07:46:43.7492748Z Prepare all required actions 2025-09-07T07:46:43.7493079Z Getting action download info 2025-09-07T07:46:43.8656883Z ##[group]Run ./.github/actions/filter-test-configs 2025-09-07T07:46:43.8657150Z with: 2025-09-07T07:46:43.8657603Z github-token: *** 2025-09-07T07:46:43.8658856Z test-matrix: {"include": [{"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}]} 2025-09-07T07:46:43.8660354Z job-name: nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-09-07T07:46:43.8660761Z env: 2025-09-07T07:46:43.8660905Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:43.8661082Z ##[endgroup] 2025-09-07T07:46:43.8684369Z ##[group]Run nick-fields/retry@v3.0.0 2025-09-07T07:46:43.8684630Z with: 2025-09-07T07:46:43.8684803Z shell: bash 2025-09-07T07:46:43.8684997Z timeout_minutes: 10 2025-09-07T07:46:43.8685202Z max_attempts: 5 2025-09-07T07:46:43.8685406Z retry_wait_seconds: 30 2025-09-07T07:46:43.8686004Z command: set -eux # PyYAML 6.0 doesn't work with MacOS x86 anymore # This must run on Python-3.7 (AmazonLinux2) so can't use request=3.32.2 python3 -m pip install requests==2.27.1 pyyaml==6.0.2 2025-09-07T07:46:43.8686732Z polling_interval_seconds: 1 2025-09-07T07:46:43.8687141Z warning_on_retry: true 2025-09-07T07:46:43.8687382Z continue_on_error: false 2025-09-07T07:46:43.8687619Z env: 2025-09-07T07:46:43.8687823Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:43.8688252Z GITHUB_TOKEN: *** 2025-09-07T07:46:43.8688480Z ##[endgroup] 2025-09-07T07:46:43.9549170Z + python3 -m pip install requests==2.27.1 pyyaml==6.0.2 2025-09-07T07:46:44.1292208Z Defaulting to user installation 
because normal site-packages is not writeable 2025-09-07T07:46:44.2325109Z Collecting requests==2.27.1 2025-09-07T07:46:44.2465830Z Downloading requests-2.27.1-py2.py3-none-any.whl (63 kB) 2025-09-07T07:46:44.3700510Z Collecting pyyaml==6.0.2 2025-09-07T07:46:44.3728616Z Downloading PyYAML-6.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (737 kB) 2025-09-07T07:46:44.3928326Z Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3.9/site-packages (from requests==2.27.1) (1.25.10) 2025-09-07T07:46:44.6418566Z Collecting charset-normalizer~=2.0.0 2025-09-07T07:46:44.6447540Z Downloading charset_normalizer-2.0.12-py3-none-any.whl (39 kB) 2025-09-07T07:46:44.6618362Z Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3.9/site-packages (from requests==2.27.1) (2.10) 2025-09-07T07:46:44.7057695Z Collecting certifi>=2017.4.17 2025-09-07T07:46:44.7090300Z Downloading certifi-2025.8.3-py3-none-any.whl (161 kB) 2025-09-07T07:46:44.7803538Z Installing collected packages: charset-normalizer, certifi, requests, pyyaml 2025-09-07T07:46:45.0620796Z Successfully installed certifi-2025.8.3 charset-normalizer-2.0.12 pyyaml-6.0.2 requests-2.27.1 2025-09-07T07:46:45.9322523Z Command completed after 1 attempt(s). 2025-09-07T07:46:45.9385840Z ##[group]Run set -x 2025-09-07T07:46:45.9386051Z set -x 2025-09-07T07:46:45.9386215Z  2025-09-07T07:46:45.9386478Z # Use relative path here as this could be checked out anywhere, not necessarily 2025-09-07T07:46:45.9386771Z # in runner workspace 2025-09-07T07:46:45.9387040Z python3 "${GITHUB_ACTION_PATH}/../../scripts/parse_ref.py" 2025-09-07T07:46:45.9392418Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:46:45.9392814Z env: 2025-09-07T07:46:45.9392970Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:45.9393166Z ##[endgroup] 2025-09-07T07:46:45.9415756Z + python3 /home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/filter-test-configs/../../scripts/parse_ref.py 2025-09-07T07:46:45.9563569Z Setting output branch=main 2025-09-07T07:46:45.9634733Z ##[group]Run echo "Workflow: ${GITHUB_WORKFLOW}" 2025-09-07T07:46:45.9635032Z echo "Workflow: ${GITHUB_WORKFLOW}" 2025-09-07T07:46:45.9635257Z echo "Job name: ${JOB_NAME}" 2025-09-07T07:46:45.9635448Z  2025-09-07T07:46:45.9635698Z # Use relative path here as this could be checked out anywhere, not necessarily 2025-09-07T07:46:45.9635995Z # in runner workspace 2025-09-07T07:46:45.9636273Z python3 "${GITHUB_ACTION_PATH}/../../scripts/filter_test_configs.py" \ 2025-09-07T07:46:45.9636594Z  --workflow "${GITHUB_WORKFLOW}" \ 2025-09-07T07:46:45.9636837Z  --job-name "${JOB_NAME}" \ 2025-09-07T07:46:45.9638157Z  --test-matrix "{"include": [{"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}]}" \ 2025-09-07T07:46:45.9639515Z  --selected-test-configs "" \ 2025-09-07T07:46:45.9639731Z  --pr-number "${PR_NUMBER}" \ 2025-09-07T07:46:45.9639944Z  --tag "${TAG}" \ 
2025-09-07T07:46:45.9640152Z  --event-name "${EVENT_NAME}" \ 2025-09-07T07:46:45.9640383Z  --schedule "${SCHEDULE}" \ 2025-09-07T07:46:45.9640604Z  --branch "${HEAD_BRANCH}" 2025-09-07T07:46:45.9645824Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:46:45.9646100Z env: 2025-09-07T07:46:45.9646283Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:45.9647065Z GITHUB_TOKEN: *** 2025-09-07T07:46:45.9647533Z JOB_NAME: nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-09-07T07:46:45.9647993Z PR_NUMBER: 2025-09-07T07:46:45.9648167Z TAG: 2025-09-07T07:46:45.9648339Z EVENT_NAME: schedule 2025-09-07T07:46:45.9648531Z SCHEDULE: 0 7 * * * 2025-09-07T07:46:45.9648715Z HEAD_BRANCH: main 2025-09-07T07:46:45.9648904Z ##[endgroup] 2025-09-07T07:46:45.9675981Z Workflow: inductor-nightly 2025-09-07T07:46:45.9676533Z Job name: nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-09-07T07:46:46.1175013Z Setting output keep-going=True 2025-09-07T07:46:46.1180519Z Setting output ci-verbose-test-logs=False 2025-09-07T07:46:46.1186574Z Setting output ci-test-showlocals=False 2025-09-07T07:46:46.1186857Z Setting output ci-no-test-timeout=False 2025-09-07T07:46:46.1187093Z Setting output ci-no-td=False 2025-09-07T07:46:46.1187312Z Setting output ci-td-distributed=False 2025-09-07T07:46:46.1187522Z Setting output is-unstable=False 2025-09-07T07:46:46.1187728Z Setting output reenabled-issues= 2025-09-07T07:46:46.1189337Z Setting output test-matrix={"include": [{"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}]} 2025-09-07T07:46:46.1190754Z Setting output is-test-matrix-empty=False 2025-09-07T07:46:46.1289368Z ##[group]Run echo "Filtered matrix:" 2025-09-07T07:46:46.1289612Z echo "Filtered matrix:" 2025-09-07T07:46:46.1291030Z echo "{"include": [{"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface", "shard": 1, "num_shards": 1, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_timm", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_timm", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_torchbench", "shard": 1, "num_shards": 2, "runner": "linux.8xlarge.amx"}, {"config": "dynamic_cpu_max_autotune_inductor_amp_freezing_torchbench", "shard": 2, "num_shards": 2, "runner": "linux.8xlarge.amx"}]}" 2025-09-07T07:46:46.1292528Z  2025-09-07T07:46:46.1292699Z echo 2025-09-07T07:46:46.1292915Z echo "Is the current job unstable? False" 2025-09-07T07:46:46.1293157Z  2025-09-07T07:46:46.1293323Z echo 2025-09-07T07:46:46.1293526Z echo "Is keep-going label set? 
True" 2025-09-07T07:46:46.1293760Z  2025-09-07T07:46:46.1293918Z echo 2025-09-07T07:46:46.1294109Z echo "Reenabled issues? " 2025-09-07T07:46:46.1298777Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:46:46.1299014Z env: 2025-09-07T07:46:46.1299169Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:46.1299348Z ##[endgroup] 2025-09-07T07:46:46.1321521Z Filtered matrix: 2025-09-07T07:46:46.1322931Z {include: [{config: dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, shard: 1, num_shards: 1, runner: linux.8xlarge.amx}, {config: dynamic_cpu_max_autotune_inductor_amp_freezing_timm, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: dynamic_cpu_max_autotune_inductor_amp_freezing_timm, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}, {config: dynamic_cpu_max_autotune_inductor_amp_freezing_torchbench, shard: 1, num_shards: 2, runner: linux.8xlarge.amx}, {config: dynamic_cpu_max_autotune_inductor_amp_freezing_torchbench, shard: 2, num_shards: 2, runner: linux.8xlarge.amx}]} 2025-09-07T07:46:46.1324178Z 2025-09-07T07:46:46.1324263Z Is the current job unstable? False 2025-09-07T07:46:46.1324406Z 2025-09-07T07:46:46.1324491Z Is keep-going label set? True 2025-09-07T07:46:46.1324745Z 2025-09-07T07:46:46.1324818Z Reenabled issues? 2025-09-07T07:46:46.1475346Z ##[group]Run echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2025-09-07T07:46:46.1475690Z echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2025-09-07T07:46:46.1479890Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:46:46.1480135Z env: 2025-09-07T07:46:46.1480305Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:46.1480594Z JOB_TIMEOUT: 720 2025-09-07T07:46:46.1480754Z ##[endgroup] 2025-09-07T07:46:46.3209835Z ##[group]Run env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-09-07T07:46:46.3210219Z env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-09-07T07:46:46.3210510Z env | grep '^CI' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2025-09-07T07:46:46.3215221Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T07:46:46.3215478Z env: 2025-09-07T07:46:46.3215648Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:46.3215826Z ##[endgroup] 2025-09-07T07:46:46.3335247Z ##[group]Run set -x 2025-09-07T07:46:46.3335514Z set -x 2025-09-07T07:46:46.3335675Z  2025-09-07T07:46:46.3335868Z if [[ $TEST_CONFIG == 'multigpu' ]]; then 2025-09-07T07:46:46.3336143Z  TEST_COMMAND=.ci/pytorch/multigpu-test.sh 2025-09-07T07:46:46.3336403Z elif [[ $BUILD_ENVIRONMENT == *onnx* ]]; then 2025-09-07T07:46:46.3336637Z  TEST_COMMAND=.ci/onnx/test.sh 2025-09-07T07:46:46.3336863Z else 2025-09-07T07:46:46.3337048Z  TEST_COMMAND=.ci/pytorch/test.sh 2025-09-07T07:46:46.3337256Z fi 2025-09-07T07:46:46.3337399Z  2025-09-07T07:46:46.3337583Z # Leaving 1GB for the runner and other things 2025-09-07T07:46:46.3337946Z TOTAL_AVAILABLE_MEMORY_IN_GB=$(awk '/MemTotal/ { printf "%.3f \n", $2/1024/1024 - 1 }' /proc/meminfo) 2025-09-07T07:46:46.3338488Z # https://docs.docker.com/engine/containers/resource_constraints/#--memory-swap-details, the 3GB swap 2025-09-07T07:46:46.3338912Z # comes from https://github.com/pytorch/test-infra/pull/6058 2025-09-07T07:46:46.3339245Z TOTAL_MEMORY_WITH_SWAP=$(("${TOTAL_AVAILABLE_MEMORY_IN_GB%.*}" + 3)) 2025-09-07T07:46:46.3339506Z  2025-09-07T07:46:46.3339698Z if [[ ${BUILD_ENVIRONMENT} == *"s390x"* ]]; then 2025-09-07T07:46:46.3339920Z  SHM_OPTS= 2025-09-07T07:46:46.3340100Z  JENKINS_USER= 2025-09-07T07:46:46.3340343Z  # ensure that docker container 
cleanly exits in 12 hours 2025-09-07T07:46:46.3340849Z  # if for some reason cleanup action doesn't stop container 2025-09-07T07:46:46.3341107Z  # when job is cancelled 2025-09-07T07:46:46.3341309Z  DOCKER_SHELL_CMD="sleep 12h" 2025-09-07T07:46:46.3341510Z else 2025-09-07T07:46:46.3341692Z  SHM_OPTS="--shm-size=${SHM_SIZE}" 2025-09-07T07:46:46.3341915Z  JENKINS_USER="--user jenkins" 2025-09-07T07:46:46.3342118Z  DOCKER_SHELL_CMD= 2025-09-07T07:46:46.3342299Z fi 2025-09-07T07:46:46.3342449Z  2025-09-07T07:46:46.3342674Z # detached container should get cleaned up by teardown_ec2_linux 2025-09-07T07:46:46.3342994Z # TODO: Stop building test binaries as part of the build phase 2025-09-07T07:46:46.3343357Z # Used for GPU_FLAG, SHM_OPTS, JENKINS_USER and DOCKER_SHELL_CMD since that doesn't play nice 2025-09-07T07:46:46.3343681Z # shellcheck disable=SC2086,SC2090 2025-09-07T07:46:46.3343902Z container_name=$(docker run \ 2025-09-07T07:46:46.3344110Z  ${GPU_FLAG:-} \ 2025-09-07T07:46:46.3344310Z  ${SCCACHE_SERVER_PORT_DOCKER_FLAG:-} \ 2025-09-07T07:46:46.3344537Z  -e BUILD_ENVIRONMENT \ 2025-09-07T07:46:46.3344737Z  -e PR_NUMBER \ 2025-09-07T07:46:46.3344924Z  -e GITHUB_ACTIONS \ 2025-09-07T07:46:46.3345317Z  -e GITHUB_REPOSITORY \ 2025-09-07T07:46:46.3345527Z  -e GITHUB_WORKFLOW \ 2025-09-07T07:46:46.3345723Z  -e GITHUB_JOB \ 2025-09-07T07:46:46.3345910Z  -e GITHUB_RUN_ID \ 2025-09-07T07:46:46.3346094Z  -e GITHUB_RUN_NUMBER \ 2025-09-07T07:46:46.3346296Z  -e GITHUB_RUN_ATTEMPT \ 2025-09-07T07:46:46.3346496Z  -e JOB_ID \ 2025-09-07T07:46:46.3346678Z  -e JOB_NAME \ 2025-09-07T07:46:46.3346851Z  -e BASE_SHA \ 2025-09-07T07:46:46.3347134Z  -e BRANCH \ 2025-09-07T07:46:46.3347305Z  -e SHA1 \ 2025-09-07T07:46:46.3347485Z  -e AWS_DEFAULT_REGION \ 2025-09-07T07:46:46.3347677Z  -e IN_WHEEL_TEST \ 2025-09-07T07:46:46.3347868Z  -e SHARD_NUMBER \ 2025-09-07T07:46:46.3348053Z  -e TEST_CONFIG \ 2025-09-07T07:46:46.3348239Z  -e NUM_TEST_SHARDS \ 2025-09-07T07:46:46.3348429Z  -e REENABLED_ISSUES \ 2025-09-07T07:46:46.3348633Z  -e CONTINUE_THROUGH_ERROR \ 2025-09-07T07:46:46.3348926Z  -e VERBOSE_TEST_LOGS \ 2025-09-07T07:46:46.3349148Z  -e TEST_SHOWLOCALS \ 2025-09-07T07:46:46.3349339Z  -e NO_TEST_TIMEOUT \ 2025-09-07T07:46:46.3349516Z  -e NO_TD \ 2025-09-07T07:46:46.3349704Z  -e TD_DISTRIBUTED \ 2025-09-07T07:46:46.3349889Z  -e PR_LABELS \ 2025-09-07T07:46:46.3350095Z  -e MAX_JOBS="$(nproc --ignore=2)" \ 2025-09-07T07:46:46.3350302Z  -e SCCACHE_BUCKET \ 2025-09-07T07:46:46.3350492Z  -e SCCACHE_REGION \ 2025-09-07T07:46:46.3350677Z  -e XLA_CUDA \ 2025-09-07T07:46:46.3350874Z  -e XLA_CLANG_CACHE_S3_BUCKET_NAME \ 2025-09-07T07:46:46.3351102Z  -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK \ 2025-09-07T07:46:46.3351342Z  -e PYTORCH_TEST_RERUN_DISABLED_TESTS \ 2025-09-07T07:46:46.3351576Z  -e SKIP_SCCACHE_INITIALIZATION=1 \ 2025-09-07T07:46:46.3351797Z  -e HUGGING_FACE_HUB_TOKEN \ 2025-09-07T07:46:46.3352015Z  -e VLLM_TEST_HUGGING_FACE_TOKEN \ 2025-09-07T07:46:46.3352231Z  -e SCRIBE_GRAPHQL_ACCESS_TOKEN \ 2025-09-07T07:46:46.3352444Z  -e DASHBOARD_TAG \ 2025-09-07T07:46:46.3352646Z  -e ARTIFACTS_FILE_SUFFIX \ 2025-09-07T07:46:46.3352895Z  --memory="${TOTAL_AVAILABLE_MEMORY_IN_GB%.*}g" \ 2025-09-07T07:46:46.3353152Z  --memory-swap="${TOTAL_MEMORY_WITH_SWAP}g" \ 2025-09-07T07:46:46.3353417Z  --env-file="/tmp/github_env_${GITHUB_RUN_ID}" \ 2025-09-07T07:46:46.3353681Z  --security-opt seccomp=unconfined \ 2025-09-07T07:46:46.3353897Z  --cap-add=SYS_PTRACE \ 2025-09-07T07:46:46.3354079Z  --ipc=host \ 2025-09-07T07:46:46.3354251Z  
${SHM_OPTS} \ 2025-09-07T07:46:46.3354417Z  --tty \ 2025-09-07T07:46:46.3354579Z  --detach \ 2025-09-07T07:46:46.3354750Z  --name="${container_name}" \ 2025-09-07T07:46:46.3354951Z  ${JENKINS_USER} \ 2025-09-07T07:46:46.3355178Z  -v "${GITHUB_WORKSPACE}:/var/lib/jenkins/workspace" \ 2025-09-07T07:46:46.3355426Z  -w /var/lib/jenkins/workspace \ 2025-09-07T07:46:46.3355628Z  "${DOCKER_IMAGE}" \ 2025-09-07T07:46:46.3355802Z  ${DOCKER_SHELL_CMD} 2025-09-07T07:46:46.3355977Z ) 2025-09-07T07:46:46.3356173Z # Propagate download.pytorch.org IP to container 2025-09-07T07:46:46.3356566Z grep download.pytorch.org /etc/hosts | docker exec -i "${container_name}" sudo bash -c "/bin/cat >> /etc/hosts" 2025-09-07T07:46:46.3356975Z echo "DOCKER_CONTAINER_ID=${container_name}" >> "${GITHUB_ENV}" 2025-09-07T07:46:46.3357227Z  2025-09-07T07:46:46.3357408Z if [[ ${BUILD_ENVIRONMENT} == *"s390x"* ]]; then 2025-09-07T07:46:46.3357754Z  docker exec -t "${container_name}" sh -c "python3 -m pip install -r .ci/docker/requirements-ci.txt" 2025-09-07T07:46:46.3358058Z fi 2025-09-07T07:46:46.3358198Z  2025-09-07T07:46:46.3358496Z docker exec -t "${container_name}" sh -c "python3 -m pip install $(echo dist/*.whl)[opt-einsum] && ${TEST_COMMAND}" 2025-09-07T07:46:46.3362848Z shell: /usr/bin/bash -e {0} 2025-09-07T07:46:46.3363043Z env: 2025-09-07T07:46:46.3363205Z GIT_DEFAULT_BRANCH: main 2025-09-07T07:46:46.3363434Z BUILD_ENVIRONMENT: linux-jammy-py3.9-gcc11-build 2025-09-07T07:46:46.3363675Z PR_NUMBER: 2025-09-07T07:46:46.3363860Z GITHUB_REPOSITORY: pytorch/pytorch 2025-09-07T07:46:46.3364129Z GITHUB_WORKFLOW: inductor-nightly 2025-09-07T07:46:46.3364333Z GITHUB_JOB: test 2025-09-07T07:46:46.3364509Z GITHUB_RUN_ID: 17525270809 2025-09-07T07:46:46.3364705Z GITHUB_RUN_NUMBER: 298 2025-09-07T07:46:46.3364888Z GITHUB_RUN_ATTEMPT: 1 2025-09-07T07:46:46.3365078Z JOB_ID: 49775559413 2025-09-07T07:46:46.3365505Z JOB_NAME: nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-09-07T07:46:46.3365942Z BRANCH: main 2025-09-07T07:46:46.3366218Z SHA1: 93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:46:46.3366481Z BASE_SHA: 93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:46:46.3366982Z TEST_CONFIG: dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface 2025-09-07T07:46:46.3367281Z SHARD_NUMBER: 1 2025-09-07T07:46:46.3367462Z NUM_TEST_SHARDS: 1 2025-09-07T07:46:46.3367636Z REENABLED_ISSUES: 2025-09-07T07:46:46.3367825Z CONTINUE_THROUGH_ERROR: True 2025-09-07T07:46:46.3368035Z VERBOSE_TEST_LOGS: False 2025-09-07T07:46:46.3368232Z TEST_SHOWLOCALS: False 2025-09-07T07:46:46.3368417Z NO_TEST_TIMEOUT: False 2025-09-07T07:46:46.3368601Z NO_TD: False 2025-09-07T07:46:46.3368776Z TD_DISTRIBUTED: False 2025-09-07T07:46:46.3369003Z SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2 2025-09-07T07:46:46.3369248Z SCCACHE_REGION: us-east-1 2025-09-07T07:46:46.3369440Z SHM_SIZE: 1g 2025-09-07T07:46:46.3369982Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:46:46.3370538Z XLA_CUDA: 2025-09-07T07:46:46.3370795Z XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla 2025-09-07T07:46:46.3371102Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK: 0 2025-09-07T07:46:46.3371334Z PYTORCH_TEST_RERUN_DISABLED_TESTS: 0 2025-09-07T07:46:46.3371547Z DASHBOARD_TAG: 2025-09-07T07:46:46.3371951Z VLLM_TEST_HUGGING_FACE_TOKEN: *** 
2025-09-07T07:46:46.3372248Z HUGGING_FACE_HUB_TOKEN: *** 2025-09-07T07:46:46.3372529Z SCRIBE_GRAPHQL_ACCESS_TOKEN: *** 2025-09-07T07:46:46.3372941Z ARTIFACTS_FILE_SUFFIX: test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413 2025-09-07T07:46:46.3373345Z ##[endgroup] 2025-09-07T07:46:46.3395877Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == \m\u\l\t\i\g\p\u ]] 2025-09-07T07:46:46.3396320Z + [[ linux-jammy-py3.9-gcc11-build == *onnx* ]] 2025-09-07T07:46:46.3396622Z + TEST_COMMAND=.ci/pytorch/test.sh 2025-09-07T07:46:46.3397056Z ++ awk '/MemTotal/ { printf "%.3f \n", $2/1024/1024 - 1 }' /proc/meminfo 2025-09-07T07:46:46.3418708Z + TOTAL_AVAILABLE_MEMORY_IN_GB='122.780 ' 2025-09-07T07:46:46.3419124Z + TOTAL_MEMORY_WITH_SWAP=125 2025-09-07T07:46:46.3419462Z + [[ linux-jammy-py3.9-gcc11-build == *\s\3\9\0\x* ]] 2025-09-07T07:46:46.3419844Z + SHM_OPTS=--shm-size=1g 2025-09-07T07:46:46.3420130Z + JENKINS_USER='--user jenkins' 2025-09-07T07:46:46.3420773Z + DOCKER_SHELL_CMD= 2025-09-07T07:46:46.3425403Z +++ nproc --ignore=2 2025-09-07T07:46:46.3836347Z ++ docker run -e BUILD_ENVIRONMENT -e PR_NUMBER -e GITHUB_ACTIONS -e GITHUB_REPOSITORY -e GITHUB_WORKFLOW -e GITHUB_JOB -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e JOB_ID -e JOB_NAME -e BASE_SHA -e BRANCH -e SHA1 -e AWS_DEFAULT_REGION -e IN_WHEEL_TEST -e SHARD_NUMBER -e TEST_CONFIG -e NUM_TEST_SHARDS -e REENABLED_ISSUES -e CONTINUE_THROUGH_ERROR -e VERBOSE_TEST_LOGS -e TEST_SHOWLOCALS -e NO_TEST_TIMEOUT -e NO_TD -e TD_DISTRIBUTED -e PR_LABELS -e MAX_JOBS=30 -e SCCACHE_BUCKET -e SCCACHE_REGION -e XLA_CUDA -e XLA_CLANG_CACHE_S3_BUCKET_NAME -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK -e PYTORCH_TEST_RERUN_DISABLED_TESTS -e SKIP_SCCACHE_INITIALIZATION=1 -e HUGGING_FACE_HUB_TOKEN -e VLLM_TEST_HUGGING_FACE_TOKEN -e SCRIBE_GRAPHQL_ACCESS_TOKEN -e DASHBOARD_TAG -e ARTIFACTS_FILE_SUFFIX --memory=122g --memory-swap=125g --env-file=/tmp/github_env_17525270809 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --ipc=host --shm-size=1g --tty --detach --name= --user jenkins -v /home/ec2-user/actions-runner/_work/pytorch/pytorch:/var/lib/jenkins/workspace -w /var/lib/jenkins/workspace 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T07:47:00.1243479Z + container_name=f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T07:47:00.1253789Z + grep download.pytorch.org /etc/hosts 2025-09-07T07:47:00.1259366Z + docker exec -i f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc sudo bash -c '/bin/cat >> /etc/hosts' 2025-09-07T07:47:00.2546478Z + echo DOCKER_CONTAINER_ID=f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T07:47:00.2547063Z + [[ linux-jammy-py3.9-gcc11-build == *\s\3\9\0\x* ]] 2025-09-07T07:47:00.2553804Z ++ echo dist/torch-2.9.0a0+git93fb23d-cp39-cp39-linux_x86_64.whl 2025-09-07T07:47:00.2554785Z + docker exec -t f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc sh -c 'python3 -m pip install dist/torch-2.9.0a0+git93fb23d-cp39-cp39-linux_x86_64.whl[opt-einsum] && .ci/pytorch/test.sh' 2025-09-07T07:47:00.6060008Z Processing ./dist/torch-2.9.0a0+git93fb23d-cp39-cp39-linux_x86_64.whl (from torch==2.9.0a0+git93fb23d) 2025-09-07T07:47:00.8198654Z Requirement already satisfied: filelock in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git93fb23d->torch==2.9.0a0+git93fb23d) 
(3.19.1) 2025-09-07T07:47:00.8199565Z Requirement already satisfied: typing-extensions>=4.10.0 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git93fb23d->torch==2.9.0a0+git93fb23d) (4.15.0) 2025-09-07T07:47:00.8204226Z Requirement already satisfied: sympy>=1.13.3 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git93fb23d->torch==2.9.0a0+git93fb23d) (1.13.3) 2025-09-07T07:47:00.8204990Z Requirement already satisfied: networkx>=2.5.1 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git93fb23d->torch==2.9.0a0+git93fb23d) (2.8.8) 2025-09-07T07:47:00.8205736Z Requirement already satisfied: jinja2 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git93fb23d->torch==2.9.0a0+git93fb23d) (3.1.6) 2025-09-07T07:47:00.8212520Z Requirement already satisfied: fsspec>=0.8.5 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git93fb23d->torch==2.9.0a0+git93fb23d) (2025.3.0) 2025-09-07T07:47:00.8231793Z Requirement already satisfied: opt-einsum>=3.3 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from torch==2.9.0a0+git93fb23d->torch==2.9.0a0+git93fb23d) (3.3.0) 2025-09-07T07:47:00.8521753Z Requirement already satisfied: numpy>=1.7 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from opt-einsum>=3.3->torch==2.9.0a0+git93fb23d->torch==2.9.0a0+git93fb23d) (1.22.4) 2025-09-07T07:47:00.8537571Z Requirement already satisfied: mpmath<1.4,>=1.1.0 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from sympy>=1.13.3->torch==2.9.0a0+git93fb23d->torch==2.9.0a0+git93fb23d) (1.3.0) 2025-09-07T07:47:00.8574149Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/py_3.9/lib/python3.9/site-packages (from jinja2->torch==2.9.0a0+git93fb23d->torch==2.9.0a0+git93fb23d) (3.0.2) 2025-09-07T07:47:01.6257126Z Installing collected packages: torch 2025-09-07T07:47:08.6268648Z ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 2025-09-07T07:47:08.6269263Z dall-e 0.1 requires torchvision, which is not installed. 2025-09-07T07:47:08.6273267Z effdet 0.4.1 requires torchvision, which is not installed. 2025-09-07T07:47:08.6273878Z pytorch-labs-segment-anything-fast 0.2 requires torchao, which is not installed. 2025-09-07T07:47:08.6277915Z pytorch-labs-segment-anything-fast 0.2 requires torchvision>=0.17.0.dev20231026, which is not installed. 2025-09-07T07:47:08.6278503Z timm 1.0.14 requires torchvision, which is not installed. 
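The --memory=122g / --memory-swap=125g limits on the docker run above come from /proc/meminfo: total memory minus 1 GB reserved for the runner, truncated to whole gigabytes, plus a fixed 3 GB swap allowance. A minimal Python sketch of that arithmetic (the real step does it inline in bash with awk and ${VAR%.*}):

# Sketch: reproduce the container memory sizing used for the docker run above.
# Assumes a Linux /proc/meminfo with a "MemTotal: <kB>" line.
def container_memory_limits(meminfo_path="/proc/meminfo"):
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                mem_total_kb = int(line.split()[1])
                break
        else:
            raise RuntimeError("MemTotal not found in /proc/meminfo")
    # Leave 1 GB for the runner, then truncate to whole GB (the shell strips the decimals).
    available_gb = int(mem_total_kb / 1024 / 1024 - 1)
    # --memory-swap is the memory limit plus a fixed 3 GB swap allowance.
    swap_gb = available_gb + 3
    return f"--memory={available_gb}g", f"--memory-swap={swap_gb}g"

if __name__ == "__main__":
    print(*container_memory_limits())

On this runner the shell computed 122.780 GB available, which truncates to --memory=122g and --memory-swap=125g, matching the expanded docker run command in the trace.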
2025-09-07T07:47:08.6279315Z Successfully installed torch-2.9.0a0+git93fb23d 2025-09-07T07:47:08.7185883Z + export TERM=vt100 2025-09-07T07:47:08.7186700Z + TERM=vt100 2025-09-07T07:47:08.7187052Z ++ dirname .ci/pytorch/test.sh 2025-09-07T07:47:08.7192419Z + source .ci/pytorch/common.sh 2025-09-07T07:47:08.7196088Z +++ dirname .ci/pytorch/common.sh 2025-09-07T07:47:08.7204802Z ++ source .ci/pytorch/common_utils.sh 2025-09-07T07:47:08.7205095Z +++ declare -f -t trap_add 2025-09-07T07:47:08.7205983Z ++ set -ex -o pipefail 2025-09-07T07:47:08.7206285Z ++ [[ linux-jammy-py3.9-gcc11-build == *rocm* ]] 2025-09-07T07:47:08.7206549Z ++ BUILD_TEST_LIBTORCH=0 2025-09-07T07:47:08.7225917Z ++ dirname .ci/pytorch/test.sh 2025-09-07T07:47:08.7229341Z + source .ci/pytorch/common-build.sh 2025-09-07T07:47:08.7229665Z ++ [[ linux-jammy-py3.9-gcc11-build != *win-* ]] 2025-09-07T07:47:08.7234028Z ++++ dirname .ci/pytorch/common-build.sh 2025-09-07T07:47:08.7242784Z +++ cd .ci/pytorch 2025-09-07T07:47:08.7243402Z +++ pwd -P 2025-09-07T07:47:08.7243816Z ++ script_dir=/var/lib/jenkins/workspace/.ci/pytorch 2025-09-07T07:47:08.7244346Z ++ [[ linux-jammy-py3.9-gcc11-build == *-pch* ]] 2025-09-07T07:47:08.7244703Z ++ which sccache 2025-09-07T07:47:08.7268001Z ++ [[ -z ossci-compiler-cache-circleci-v2 ]] 2025-09-07T07:47:08.7268710Z ++ sccache --stop-server 2025-09-07T07:47:08.7291324Z ++ true 2025-09-07T07:47:08.7291556Z ++ rm -f /var/lib/jenkins/sccache_error.log 2025-09-07T07:47:08.7299555Z ++ trap_add sccache_epilogue EXIT 2025-09-07T07:47:08.7299991Z ++ trap_add_cmd=sccache_epilogue 2025-09-07T07:47:08.7300240Z ++ shift 2025-09-07T07:47:08.7300419Z ++ for trap_add_name in "$@" 2025-09-07T07:47:08.7307986Z ++++ trap -p EXIT 2025-09-07T07:47:08.7313557Z +++ eval 'extract_trap_cmd ' 2025-09-07T07:47:08.7318428Z ++++ extract_trap_cmd 2025-09-07T07:47:08.7320615Z ++++ printf '%s\n' '' 2025-09-07T07:47:08.7320844Z +++ printf '%s\n' sccache_epilogue 2025-09-07T07:47:08.7321106Z ++ trap -- ' 2025-09-07T07:47:08.7321305Z sccache_epilogue' EXIT 2025-09-07T07:47:08.7321510Z ++ [[ -n 1 ]] 2025-09-07T07:47:08.7321820Z ++ echo 'Skipping sccache server initialization, setting environment variables' 2025-09-07T07:47:08.7322241Z Skipping sccache server initialization, setting environment variables 2025-09-07T07:47:08.7322553Z ++ export SCCACHE_IDLE_TIMEOUT=0 2025-09-07T07:47:08.7322772Z ++ SCCACHE_IDLE_TIMEOUT=0 2025-09-07T07:47:08.7323026Z ++ export SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-09-07T07:47:08.7323344Z ++ SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-09-07T07:47:08.7323661Z ++ export RUST_LOG=sccache::server=error 2025-09-07T07:47:08.7323899Z ++ RUST_LOG=sccache::server=error 2025-09-07T07:47:08.7324113Z ++ sccache --zero-stats 2025-09-07T07:47:08.8725536Z Statistics zeroed. 
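The trap_add trace above shows how common_utils.sh chains multiple cleanup commands onto bash's single EXIT trap: it reads the existing trap body and appends the new command, so sccache_epilogue and then cleanup_workspace both run when the script exits. A rough Python analogue of the same pattern, using atexit with an explicit list to preserve registration order (the handler bodies here are stand-ins, not the real functions):

# Sketch: chain several cleanup steps onto a single exit hook, analogous to trap_add.
import atexit

_cleanup_steps = []

def trap_add(func):
    """Register a cleanup step; steps run in registration order on exit."""
    _cleanup_steps.append(func)

def _run_cleanup():
    for step in _cleanup_steps:
        step()

atexit.register(_run_cleanup)

# Hypothetical stand-ins for the real sccache_epilogue / cleanup_workspace:
trap_add(lambda: print("sccache_epilogue: flush and report compiler cache stats"))
trap_add(lambda: print("cleanup_workspace: restore workspace ownership"))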
2025-09-07T07:47:08.8725811Z ++ which ccache 2025-09-07T07:47:08.8738505Z + [[ linux-jammy-py3.9-gcc11-build != *rocm* ]] 2025-09-07T07:47:08.8738797Z + [[ linux-jammy-py3.9-gcc11-build != *s390x* ]] 2025-09-07T07:47:08.8739101Z + [[ -d /var/lib/jenkins/workspace ]] 2025-09-07T07:47:08.8743303Z ++ stat -c %u /var/lib/jenkins/workspace 2025-09-07T07:47:08.8760240Z + WORKSPACE_ORIGINAL_OWNER_ID=1000 2025-09-07T07:47:08.8760678Z + trap_add cleanup_workspace EXIT 2025-09-07T07:47:08.8761016Z + trap_add_cmd=cleanup_workspace 2025-09-07T07:47:08.8761360Z + shift 2025-09-07T07:47:08.8762196Z + for trap_add_name in "$@" 2025-09-07T07:47:08.8771658Z +++ trap -p EXIT 2025-09-07T07:47:08.8777988Z ++ eval 'extract_trap_cmd trap -- '\'' 2025-09-07T07:47:08.8783743Z sccache_epilogue'\'' EXIT' 2025-09-07T07:47:08.8784489Z +++ extract_trap_cmd trap -- ' 2025-09-07T07:47:08.8784687Z sccache_epilogue' EXIT 2025-09-07T07:47:08.8784860Z +++ printf '%s\n' ' 2025-09-07T07:47:08.8785031Z sccache_epilogue' 2025-09-07T07:47:08.8785199Z ++ printf '%s\n' cleanup_workspace 2025-09-07T07:47:08.8785397Z + trap -- ' 2025-09-07T07:47:08.8785551Z sccache_epilogue 2025-09-07T07:47:08.8785712Z cleanup_workspace' EXIT 2025-09-07T07:47:08.8786177Z + sudo chown -R jenkins /var/lib/jenkins/workspace 2025-09-07T07:47:09.3001017Z + git config --global --add safe.directory /var/lib/jenkins/workspace 2025-09-07T07:47:09.3018794Z + echo 'Environment variables:' 2025-09-07T07:47:09.3019583Z Environment variables: 2025-09-07T07:47:09.3019931Z + env 2025-09-07T07:47:09.3029322Z GITHUB_WORKSPACE=/home/ec2-user/actions-runner/_work/pytorch/pytorch 2025-09-07T07:47:09.3034147Z CONTINUE_THROUGH_ERROR=True 2025-09-07T07:47:09.3039826Z BUILD_ENVIRONMENT=linux-jammy-py3.9-gcc11-build 2025-09-07T07:47:09.3044741Z VLLM_TEST_HUGGING_FACE_TOKEN=*** 2025-09-07T07:47:09.3045204Z HOSTNAME=f01c480268d5 2025-09-07T07:47:09.3045653Z GITHUB_PATH=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/add_path_0f632007-13be-4e37-9f89-85d0933c0fe6 2025-09-07T07:47:09.3046075Z GITHUB_ACTION=__run_2 2025-09-07T07:47:09.3046270Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2025-09-07T07:47:09.3046491Z GITHUB_RUN_NUMBER=298 2025-09-07T07:47:09.3046827Z TEST_CONFIG=dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface 2025-09-07T07:47:09.3047158Z GITHUB_REPOSITORY_OWNER_ID=21003710 2025-09-07T07:47:09.3047402Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2025-09-07T07:47:09.3047638Z SCCACHE_IDLE_TIMEOUT=0 2025-09-07T07:47:09.3047967Z SCRIBE_GRAPHQL_ACCESS_TOKEN=*** 2025-09-07T07:47:09.3048198Z GITHUB_TRIGGERING_ACTOR=pytorchmergebot 2025-09-07T07:47:09.3048420Z GITHUB_REF_TYPE=branch 2025-09-07T07:47:09.3048655Z BASE_SHA=93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:47:09.3048876Z XLA_CUDA= 2025-09-07T07:47:09.3049049Z NCCL_LIB_DIR=/usr/local/cuda/lib64/ 2025-09-07T07:47:09.3049324Z HUGGING_FACE_HUB_TOKEN=*** 2025-09-07T07:47:09.3058744Z *** 2025-09-07T07:47:09.3058977Z GITHUB_REPOSITORY_ID=65600975 2025-09-07T07:47:09.3059216Z GITHUB_ACTIONS=true 2025-09-07T07:47:09.3059454Z SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-09-07T07:47:09.3059747Z SHA1=93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:47:09.3060020Z GITHUB_SHA=93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:47:09.3060456Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/inductor-nightly.yml@refs/heads/main 2025-09-07T07:47:09.3060826Z UCC_HOME=/usr 2025-09-07T07:47:09.3061002Z VERBOSE_TEST_LOGS=False 2025-09-07T07:47:09.3061207Z GITHUB_REF=refs/heads/main 
2025-09-07T07:47:09.3061412Z SHARD_NUMBER=1 2025-09-07T07:47:09.3061585Z GITHUB_REF_PROTECTED=true 2025-09-07T07:47:09.3061763Z HOME=/var/lib/jenkins 2025-09-07T07:47:09.3061969Z GITHUB_API_URL=https://api.github.com 2025-09-07T07:47:09.3062202Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2025-09-07T07:47:09.3062408Z UCX_COMMIT= 2025-09-07T07:47:09.3062557Z USE_SYSTEM_NCCL=1 2025-09-07T07:47:09.3062724Z NUM_TEST_SHARDS=1 2025-09-07T07:47:09.3062885Z UCX_HOME=/usr 2025-09-07T07:47:09.3063241Z GITHUB_STATE=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/save_state_0f632007-13be-4e37-9f89-85d0933c0fe6 2025-09-07T07:47:09.3063844Z JOB_NAME=nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-09-07T07:47:09.3064437Z GITHUB_ENV=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_env_0f632007-13be-4e37-9f89-85d0933c0fe6 2025-09-07T07:47:09.3064911Z GITHUB_EVENT_PATH=/home/ec2-user/actions-runner/_work/_temp/_github_workflow/event.json 2025-09-07T07:47:09.3065221Z GITHUB_EVENT_NAME=schedule 2025-09-07T07:47:09.3065405Z DASHBOARD_TAG= 2025-09-07T07:47:09.3065564Z GITHUB_RUN_ID=17525270809 2025-09-07T07:47:09.3065746Z INSTALLED_OPENBLAS= 2025-09-07T07:47:09.3066126Z GITHUB_STEP_SUMMARY=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_0f632007-13be-4e37-9f89-85d0933c0fe6 2025-09-07T07:47:09.3066535Z GITHUB_ACTOR=pytorchmergebot 2025-09-07T07:47:09.3066720Z PR_NUMBER= 2025-09-07T07:47:09.3066875Z DESIRED_CUDA= 2025-09-07T07:47:09.3067036Z GITHUB_RUN_ATTEMPT=1 2025-09-07T07:47:09.3067219Z ANACONDA_PYTHON_VERSION=3.9 2025-09-07T07:47:09.3067439Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2025-09-07T07:47:09.3067874Z TERM=vt100 2025-09-07T07:47:09.3068031Z INSTALLED_VISION=yes 2025-09-07T07:47:09.3068201Z BRANCH=main 2025-09-07T07:47:09.3068359Z SCCACHE_REGION=us-east-1 2025-09-07T07:47:09.3068552Z OPENSSL_ROOT_DIR=/opt/openssl 2025-09-07T07:47:09.3068747Z CUDA_PATH=/usr/local/cuda 2025-09-07T07:47:09.3069077Z GITHUB_ACTION_PATH=/home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2025-09-07T07:47:09.3069423Z GITHUB_SERVER_URL=https://github.com 2025-09-07T07:47:09.3069630Z UCC_COMMIT= 2025-09-07T07:47:09.3069782Z REENABLED_ISSUES= 2025-09-07T07:47:09.3070033Z DOCS=yes 2025-09-07T07:47:09.3070178Z SHLVL=1 2025-09-07T07:47:09.3070325Z MAX_JOBS=30 2025-09-07T07:47:09.3070483Z GITHUB_ACTOR_ID=97764156 2025-09-07T07:47:09.3070707Z GITHUB_WORKFLOW_SHA=93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:47:09.3070954Z GITHUB_REF_NAME=main 2025-09-07T07:47:09.3071217Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2025-09-07T07:47:09.3071493Z GITHUB_JOB=test 2025-09-07T07:47:09.3071667Z NO_TEST_TIMEOUT=False 2025-09-07T07:47:09.3071838Z TD_DISTRIBUTED=False 2025-09-07T07:47:09.3072024Z GITHUB_REPOSITORY=pytorch/pytorch 2025-09-07T07:47:09.3072229Z GITHUB_RETENTION_DAYS=90 2025-09-07T07:47:09.3072403Z OPENSSL_DIR=/opt/openssl 2025-09-07T07:47:09.3072590Z GITHUB_ACTION_REPOSITORY= 2025-09-07T07:47:09.3073060Z PATH=/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.9/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-09-07T07:47:09.3073523Z GITHUB_BASE_REF= 2025-09-07T07:47:09.3073692Z INSTALLED_ACL= 2025-09-07T07:47:09.3074034Z ARTIFACTS_FILE_SUFFIX=test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413 
2025-09-07T07:47:09.3074410Z CI=true 2025-09-07T07:47:09.3074574Z GITHUB_REPOSITORY_OWNER=pytorch 2025-09-07T07:47:09.3074824Z RUST_LOG=sccache::server=error 2025-09-07T07:47:09.3075011Z JOB_ID=49775559413 2025-09-07T07:47:09.3075165Z GITHUB_HEAD_REF= 2025-09-07T07:47:09.3075323Z GITHUB_ACTION_REF= 2025-09-07T07:47:09.3075516Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2025-09-07T07:47:09.3075730Z TEST_SHOWLOCALS=False 2025-09-07T07:47:09.3075908Z GITHUB_WORKFLOW=inductor-nightly 2025-09-07T07:47:09.3076104Z DEBIAN_FRONTEND=noninteractive 2025-09-07T07:47:09.3076465Z GITHUB_OUTPUT=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_output_0f632007-13be-4e37-9f89-85d0933c0fe6 2025-09-07T07:47:09.3076812Z NO_TD=False 2025-09-07T07:47:09.3076971Z SKIP_SCCACHE_INITIALIZATION=1 2025-09-07T07:47:09.3077176Z NCCL_INCLUDE_DIR=/usr/local/cuda/include/ 2025-09-07T07:47:09.3077374Z _=/usr/bin/env 2025-09-07T07:47:09.3077575Z ++ python -c 'import site; print(site.getsitepackages()[0])' 2025-09-07T07:47:09.3302928Z + TORCH_INSTALL_DIR=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch 2025-09-07T07:47:09.3303390Z + TORCH_BIN_DIR=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/bin 2025-09-07T07:47:09.3303774Z + TORCH_LIB_DIR=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/lib 2025-09-07T07:47:09.3304226Z + TORCH_TEST_DIR=/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/test 2025-09-07T07:47:09.3304523Z + BUILD_DIR=build 2025-09-07T07:47:09.3304730Z + BUILD_RENAMED_DIR=build_renamed 2025-09-07T07:47:09.3304962Z + BUILD_BIN_DIR=build/bin 2025-09-07T07:47:09.3305179Z + SHARD_NUMBER=1 2025-09-07T07:47:09.3305373Z + NUM_TEST_SHARDS=1 2025-09-07T07:47:09.3305555Z + export TORCH_SERIALIZATION_DEBUG=1 2025-09-07T07:47:09.3305804Z + TORCH_SERIALIZATION_DEBUG=1 2025-09-07T07:47:09.3306023Z + export VALGRIND=ON 2025-09-07T07:47:09.3306222Z + VALGRIND=ON 2025-09-07T07:47:09.3306437Z + [[ linux-jammy-py3.9-gcc11-build == *clang9* ]] 2025-09-07T07:47:09.3306719Z + [[ linux-jammy-py3.9-gcc11-build == *xpu* ]] 2025-09-07T07:47:09.3306983Z + detect_cuda_arch 2025-09-07T07:47:09.3307202Z + [[ linux-jammy-py3.9-gcc11-build == *cuda* ]] 2025-09-07T07:47:09.3307442Z + [[ linux-jammy-py3.9-gcc11-build == *s390x* ]] 2025-09-07T07:47:09.3307657Z + [[ 0 == \1 ]] 2025-09-07T07:47:09.3308080Z + [[ True == \1 ]] 2025-09-07T07:47:09.3308269Z + [[ linux-jammy-py3.9-gcc11-build != *bazel* ]] 2025-09-07T07:47:09.3308499Z ++ realpath build/custom_test_artifacts 2025-09-07T07:47:09.3318542Z + CUSTOM_TEST_ARTIFACT_BUILD_DIR=/var/lib/jenkins/workspace/build/custom_test_artifacts 2025-09-07T07:47:09.3322354Z + [[ -n '' ]] 2025-09-07T07:47:09.3322554Z + echo 'Environment variables' 2025-09-07T07:47:09.3322761Z Environment variables 2025-09-07T07:47:09.3322932Z + env 2025-09-07T07:47:09.3345364Z GITHUB_WORKSPACE=/home/ec2-user/actions-runner/_work/pytorch/pytorch 2025-09-07T07:47:09.3345743Z CONTINUE_THROUGH_ERROR=True 2025-09-07T07:47:09.3345990Z BUILD_ENVIRONMENT=linux-jammy-py3.9-gcc11-build 2025-09-07T07:47:09.3346464Z VLLM_TEST_HUGGING_FACE_TOKEN=*** 2025-09-07T07:47:09.3346770Z HOSTNAME=f01c480268d5 2025-09-07T07:47:09.3347314Z GITHUB_PATH=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/add_path_0f632007-13be-4e37-9f89-85d0933c0fe6 2025-09-07T07:47:09.3347889Z GITHUB_ACTION=__run_2 2025-09-07T07:47:09.3348133Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2025-09-07T07:47:09.3348424Z GITHUB_RUN_NUMBER=298 2025-09-07T07:47:09.3348679Z 
TEST_CONFIG=dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface 2025-09-07T07:47:09.3348959Z GITHUB_REPOSITORY_OWNER_ID=21003710 2025-09-07T07:47:09.3349184Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2025-09-07T07:47:09.3349391Z SCCACHE_IDLE_TIMEOUT=0 2025-09-07T07:47:09.3349674Z SCRIBE_GRAPHQL_ACCESS_TOKEN=*** 2025-09-07T07:47:09.3349890Z GITHUB_TRIGGERING_ACTOR=pytorchmergebot 2025-09-07T07:47:09.3350116Z GITHUB_REF_TYPE=branch 2025-09-07T07:47:09.3350336Z BASE_SHA=93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:47:09.3350561Z XLA_CUDA= 2025-09-07T07:47:09.3350736Z NCCL_LIB_DIR=/usr/local/cuda/lib64/ 2025-09-07T07:47:09.3351142Z HUGGING_FACE_HUB_TOKEN=*** 2025-09-07T07:47:09.3351389Z *** 2025-09-07T07:47:09.3351534Z GITHUB_REPOSITORY_ID=65600975 2025-09-07T07:47:09.3351722Z GITHUB_ACTIONS=true 2025-09-07T07:47:09.3351926Z SCCACHE_ERROR_LOG=/var/lib/jenkins/sccache_error.log 2025-09-07T07:47:09.3352180Z SHA1=93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:47:09.3352418Z GITHUB_SHA=93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:47:09.3352762Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/inductor-nightly.yml@refs/heads/main 2025-09-07T07:47:09.3353077Z UCC_HOME=/usr 2025-09-07T07:47:09.3353244Z TORCH_SERIALIZATION_DEBUG=1 2025-09-07T07:47:09.3353427Z VERBOSE_TEST_LOGS=False 2025-09-07T07:47:09.3353595Z GITHUB_REF=refs/heads/main 2025-09-07T07:47:09.3353772Z SHARD_NUMBER=1 2025-09-07T07:47:09.3353939Z GITHUB_REF_PROTECTED=true 2025-09-07T07:47:09.3354116Z HOME=/var/lib/jenkins 2025-09-07T07:47:09.3354303Z GITHUB_API_URL=https://api.github.com 2025-09-07T07:47:09.3354521Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2025-09-07T07:47:09.3354713Z UCX_COMMIT= 2025-09-07T07:47:09.3354856Z USE_SYSTEM_NCCL=1 2025-09-07T07:47:09.3355013Z NUM_TEST_SHARDS=1 2025-09-07T07:47:09.3355166Z UCX_HOME=/usr 2025-09-07T07:47:09.3355502Z GITHUB_STATE=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/save_state_0f632007-13be-4e37-9f89-85d0933c0fe6 2025-09-07T07:47:09.3356093Z JOB_NAME=nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-09-07T07:47:09.3356635Z GITHUB_ENV=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_env_0f632007-13be-4e37-9f89-85d0933c0fe6 2025-09-07T07:47:09.3357085Z GITHUB_EVENT_PATH=/home/ec2-user/actions-runner/_work/_temp/_github_workflow/event.json 2025-09-07T07:47:09.3357378Z GITHUB_EVENT_NAME=schedule 2025-09-07T07:47:09.3357552Z DASHBOARD_TAG= 2025-09-07T07:47:09.3357700Z GITHUB_RUN_ID=17525270809 2025-09-07T07:47:09.3357872Z INSTALLED_OPENBLAS= 2025-09-07T07:47:09.3358222Z GITHUB_STEP_SUMMARY=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_0f632007-13be-4e37-9f89-85d0933c0fe6 2025-09-07T07:47:09.3358606Z GITHUB_ACTOR=pytorchmergebot 2025-09-07T07:47:09.3358777Z PR_NUMBER= 2025-09-07T07:47:09.3359030Z DESIRED_CUDA= 2025-09-07T07:47:09.3359184Z GITHUB_RUN_ATTEMPT=1 2025-09-07T07:47:09.3359347Z VALGRIND=ON 2025-09-07T07:47:09.3359495Z ANACONDA_PYTHON_VERSION=3.9 2025-09-07T07:47:09.3359713Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2025-09-07T07:47:09.3359928Z TERM=vt100 2025-09-07T07:47:09.3360075Z INSTALLED_VISION=yes 2025-09-07T07:47:09.3360228Z BRANCH=main 2025-09-07T07:47:09.3360385Z SCCACHE_REGION=us-east-1 2025-09-07T07:47:09.3360571Z OPENSSL_ROOT_DIR=/opt/openssl 2025-09-07T07:47:09.3360756Z CUDA_PATH=/usr/local/cuda 2025-09-07T07:47:09.3361114Z 
GITHUB_ACTION_PATH=/home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2025-09-07T07:47:09.3361453Z GITHUB_SERVER_URL=https://github.com 2025-09-07T07:47:09.3361656Z UCC_COMMIT= 2025-09-07T07:47:09.3361802Z REENABLED_ISSUES= 2025-09-07T07:47:09.3361951Z DOCS=yes 2025-09-07T07:47:09.3362094Z SHLVL=1 2025-09-07T07:47:09.3362235Z MAX_JOBS=30 2025-09-07T07:47:09.3362388Z GITHUB_ACTOR_ID=97764156 2025-09-07T07:47:09.3362612Z GITHUB_WORKFLOW_SHA=93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T07:47:09.3362856Z GITHUB_REF_NAME=main 2025-09-07T07:47:09.3363107Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2025-09-07T07:47:09.3363394Z GITHUB_JOB=test 2025-09-07T07:47:09.3363555Z NO_TEST_TIMEOUT=False 2025-09-07T07:47:09.3363716Z TD_DISTRIBUTED=False 2025-09-07T07:47:09.3363895Z GITHUB_REPOSITORY=pytorch/pytorch 2025-09-07T07:47:09.3364099Z GITHUB_RETENTION_DAYS=90 2025-09-07T07:47:09.3364274Z OPENSSL_DIR=/opt/openssl 2025-09-07T07:47:09.3364461Z GITHUB_ACTION_REPOSITORY= 2025-09-07T07:47:09.3364935Z PATH=/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.9/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-09-07T07:47:09.3365404Z GITHUB_BASE_REF= 2025-09-07T07:47:09.3365567Z INSTALLED_ACL= 2025-09-07T07:47:09.3365911Z ARTIFACTS_FILE_SUFFIX=test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413 2025-09-07T07:47:09.3366295Z CI=true 2025-09-07T07:47:09.3366456Z GITHUB_REPOSITORY_OWNER=pytorch 2025-09-07T07:47:09.3366705Z RUST_LOG=sccache::server=error 2025-09-07T07:47:09.3366990Z JOB_ID=49775559413 2025-09-07T07:47:09.3367156Z GITHUB_HEAD_REF= 2025-09-07T07:47:09.3367323Z GITHUB_ACTION_REF= 2025-09-07T07:47:09.3367530Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2025-09-07T07:47:09.3367776Z TEST_SHOWLOCALS=False 2025-09-07T07:47:09.3367980Z GITHUB_WORKFLOW=inductor-nightly 2025-09-07T07:47:09.3368249Z DEBIAN_FRONTEND=noninteractive 2025-09-07T07:47:09.3368634Z GITHUB_OUTPUT=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_output_0f632007-13be-4e37-9f89-85d0933c0fe6 2025-09-07T07:47:09.3369003Z NO_TD=False 2025-09-07T07:47:09.3369170Z SKIP_SCCACHE_INITIALIZATION=1 2025-09-07T07:47:09.3369380Z NCCL_INCLUDE_DIR=/usr/local/cuda/include/ 2025-09-07T07:47:09.3369586Z _=/usr/bin/env 2025-09-07T07:47:09.3369745Z + echo 'Testing pytorch' 2025-09-07T07:47:09.3369925Z Testing pytorch 2025-09-07T07:47:09.3370114Z + export LANG=C.UTF-8 2025-09-07T07:47:09.3370286Z + LANG=C.UTF-8 2025-09-07T07:47:09.3370437Z + PR_NUMBER= 2025-09-07T07:47:09.3370711Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == \d\e\f\a\u\l\t ]] 2025-09-07T07:47:09.3371112Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == \d\i\s\t\r\i\b\u\t\e\d ]] 2025-09-07T07:47:09.3371503Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == \s\l\o\w ]] 2025-09-07T07:47:09.3371826Z + [[ linux-jammy-py3.9-gcc11-build == *slow-gradcheck* ]] 2025-09-07T07:47:09.3372099Z + [[ linux-jammy-py3.9-gcc11-build == *cuda* ]] 2025-09-07T07:47:09.3372341Z + [[ linux-jammy-py3.9-gcc11-build == *rocm* ]] 2025-09-07T07:47:09.3372584Z + [[ linux-jammy-py3.9-gcc11-build == *xpu* ]] 2025-09-07T07:47:09.3372875Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *crossref* ]] 2025-09-07T07:47:09.3373226Z + [[ linux-jammy-py3.9-gcc11-build == *rocm* ]] 2025-09-07T07:47:09.3373467Z + [[ linux-jammy-py3.9-gcc11-build == *xpu* ]] 
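TORCH_INSTALL_DIR and its bin/lib/test subdirectories, set near the top of test.sh in the trace above, are derived from the interpreter's own site-packages path (the python -c 'import site; ...' call). A small sketch of the same derivation:

# Sketch: locate the installed torch tree the way test.sh does, from site-packages.
import os
import site

site_packages = site.getsitepackages()[0]
torch_install_dir = os.path.join(site_packages, "torch")
paths = {
    "TORCH_INSTALL_DIR": torch_install_dir,
    "TORCH_BIN_DIR": os.path.join(torch_install_dir, "bin"),
    "TORCH_LIB_DIR": os.path.join(torch_install_dir, "lib"),
    "TORCH_TEST_DIR": os.path.join(torch_install_dir, "test"),
}
for name, path in paths.items():
    print(f"{name}={path}")

In this container that resolves to /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch and its bin, lib, and test subdirectories, as shown in the log.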
2025-09-07T07:47:09.3373802Z + [[ linux-jammy-py3.9-gcc11-build != *-bazel-* ]] 2025-09-07T07:47:09.3374028Z + pip_install ninja==1.10.2 2025-09-07T07:47:09.3374278Z + pip_install_pkg='python3 -m pip install --progress-bar off' 2025-09-07T07:47:09.3374582Z + python3 -m pip install --progress-bar off ninja==1.10.2 2025-09-07T07:47:09.7035447Z Collecting ninja==1.10.2 2025-09-07T07:47:09.7152096Z Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl.metadata (5.0 kB) 2025-09-07T07:47:09.7275301Z Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (108 kB) 2025-09-07T07:47:10.4708786Z Installing collected packages: ninja 2025-09-07T07:47:10.4709476Z Attempting uninstall: ninja 2025-09-07T07:47:10.4720614Z Found existing installation: ninja 1.11.1.3 2025-09-07T07:47:10.4734445Z Uninstalling ninja-1.11.1.3: 2025-09-07T07:47:10.4784026Z Successfully uninstalled ninja-1.11.1.3 2025-09-07T07:47:10.5266909Z Successfully installed ninja-1.10.2 2025-09-07T07:47:10.6171691Z + export PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.9/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-09-07T07:47:10.6172664Z + PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/opt/conda/envs/py_3.9/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2025-09-07T07:47:10.6173241Z + [[ linux-jammy-py3.9-gcc11-build == *aarch64* ]] 2025-09-07T07:47:10.6173550Z + [[ linux-jammy-py3.9-gcc11-build == *asan* ]] 2025-09-07T07:47:10.6173862Z + [[ linux-jammy-py3.9-gcc11-build == *-debug* ]] 2025-09-07T07:47:10.6174118Z + [[ linux-jammy-py3.9-gcc11-build != *-bazel-* ]] 2025-09-07T07:47:10.6174473Z + echo 'We are not in debug mode: linux-jammy-py3.9-gcc11-build. Expect the assertion to pass' 2025-09-07T07:47:10.6174888Z We are not in debug mode: linux-jammy-py3.9-gcc11-build. Expect the assertion to pass 2025-09-07T07:47:10.6175250Z + cd test 2025-09-07T07:47:10.6175490Z + python -c 'import torch; torch._C._crash_if_debug_asserts_fail(424242)' 2025-09-07T07:47:10.8895977Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 
2025-09-07T07:47:10.8897011Z import pynvml # type: ignore[import] 2025-09-07T07:47:11.7208331Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]] 2025-09-07T07:47:11.7208911Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == \n\o\g\p\u\_\A\V\X\5\1\2 ]] 2025-09-07T07:47:11.7209376Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == \l\e\g\a\c\y\_\n\v\i\d\i\a\_\d\r\i\v\e\r ]] 2025-09-07T07:47:11.7209783Z + DYNAMO_BENCHMARK_FLAGS=() 2025-09-07T07:47:11.7210156Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *pr_time_benchmarks* ]] 2025-09-07T07:47:11.7210628Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *dynamo_eager* ]] 2025-09-07T07:47:11.7211073Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *aot_eager* ]] 2025-09-07T07:47:11.7211514Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *aot_inductor* ]] 2025-09-07T07:47:11.7211968Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *max_autotune_inductor* ]] 2025-09-07T07:47:11.7212462Z + DYNAMO_BENCHMARK_FLAGS+=(--inductor --inductor-compile-mode max-autotune) 2025-09-07T07:47:11.7212896Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *dynamic* ]] 2025-09-07T07:47:11.7213363Z + DYNAMO_BENCHMARK_FLAGS+=(--dynamic-shapes --dynamic-batch-only) 2025-09-07T07:47:11.7213743Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *cpu* ]] 2025-09-07T07:47:11.7214033Z + DYNAMO_BENCHMARK_FLAGS+=(--device cpu) 2025-09-07T07:47:11.7303142Z + [[ linux-jammy-py3.9-gcc11-build == *libtorch* ]] 2025-09-07T07:47:11.7308762Z + [[ linux-jammy-py3.9-gcc11-build == *-bazel-* ]] 2025-09-07T07:47:11.7309209Z + cd test 2025-09-07T07:47:11.7309554Z + python -c 'import torch; print(torch.__config__.show())' 2025-09-07T07:47:12.0040250Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:47:12.0041183Z import pynvml # type: ignore[import] 2025-09-07T07:47:12.6508139Z PyTorch built with: 2025-09-07T07:47:12.6513083Z - GCC 11.4 2025-09-07T07:47:12.6513372Z - C++ Version: 201703 2025-09-07T07:47:12.6513835Z - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications 2025-09-07T07:47:12.6514319Z - Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d) 2025-09-07T07:47:12.6514647Z - OpenMP 201511 (a.k.a. 
OpenMP 4.5) 2025-09-07T07:47:12.6514886Z - LAPACK is enabled (usually provided by MKL) 2025-09-07T07:47:12.6515102Z - NNPACK is enabled 2025-09-07T07:47:12.6515297Z - CPU capability usage: AVX512 2025-09-07T07:47:12.6518000Z - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=93fb23d6fae7c4e82c4239a1033e522088742634, CXX_COMPILER=/opt/cache/bin/c++, CXX_FLAGS= -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -DC10_NODEPRECATED -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -faligned-new -Werror -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.9.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_CUSPARSELT=OFF, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, USE_XCCL=OFF, USE_XPU=OFF, 2025-09-07T07:47:12.6520737Z 2025-09-07T07:47:12.8503958Z + cd test 2025-09-07T07:47:12.8504414Z + python -c 'import torch; print(torch.__config__.parallel_info())' 2025-09-07T07:47:13.1192940Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:47:13.1193873Z import pynvml # type: ignore[import] 2025-09-07T07:47:13.7637474Z ATen/Parallel: 2025-09-07T07:47:13.7637951Z at::get_num_threads() : 16 2025-09-07T07:47:13.7638322Z at::get_num_interop_threads() : 16 2025-09-07T07:47:13.7638669Z OpenMP 201511 (a.k.a. 
OpenMP 4.5) 2025-09-07T07:47:13.7639021Z omp_get_max_threads() : 16 2025-09-07T07:47:13.7639549Z Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications 2025-09-07T07:47:13.7640095Z mkl_get_max_threads() : 16 2025-09-07T07:47:13.7640568Z Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d) 2025-09-07T07:47:13.7640989Z std::thread::hardware_concurrency() : 32 2025-09-07T07:47:13.7641315Z Environment variables: 2025-09-07T07:47:13.7642072Z OMP_NUM_THREADS : [not set] 2025-09-07T07:47:13.7642362Z MKL_NUM_THREADS : [not set] 2025-09-07T07:47:13.7642588Z ATen parallel backend: OpenMP 2025-09-07T07:47:13.7642730Z 2025-09-07T07:47:13.9588030Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *numpy_2* ]] 2025-09-07T07:47:13.9593088Z + [[ linux-jammy-py3.9-gcc11-build == *aarch64* ]] 2025-09-07T07:47:13.9597267Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *backward* ]] 2025-09-07T07:47:13.9601677Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *xla* ]] 2025-09-07T07:47:13.9602174Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *vllm* ]] 2025-09-07T07:47:13.9602591Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *executorch* ]] 2025-09-07T07:47:13.9603250Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == \j\i\t\_\l\e\g\a\c\y ]] 2025-09-07T07:47:13.9603608Z + [[ linux-jammy-py3.9-gcc11-build == *libtorch* ]] 2025-09-07T07:47:13.9603931Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == distributed ]] 2025-09-07T07:47:13.9604318Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *operator_benchmark* ]] 2025-09-07T07:47:13.9604727Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *inductor_distributed* ]] 2025-09-07T07:47:13.9605159Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *inductor-halide* ]] 2025-09-07T07:47:13.9605595Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *inductor-triton-cpu* ]] 2025-09-07T07:47:13.9606061Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *inductor-micro-benchmark* ]] 2025-09-07T07:47:13.9606505Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *huggingface* ]] 2025-09-07T07:47:13.9607056Z + install_torchvision 2025-09-07T07:47:13.9607247Z + local orig_preload 2025-09-07T07:47:13.9607427Z + local commit 2025-09-07T07:47:13.9607606Z ++ get_pinned_commit vision 2025-09-07T07:47:13.9607830Z ++ cat .github/ci_commit_pins/vision.txt 2025-09-07T07:47:14.0065888Z + commit=966da7e46f65d6d49df3e31214470a4fe5cc8e66 2025-09-07T07:47:14.0069865Z + orig_preload= 2025-09-07T07:47:14.0072084Z + '[' -n '' ']' 2025-09-07T07:47:14.0072457Z + [[ linux-jammy-py3.9-gcc11-build == *cuda* ]] 2025-09-07T07:47:14.0077979Z + pip_build_and_install git+https://github.com/pytorch/vision.git@966da7e46f65d6d49df3e31214470a4fe5cc8e66 dist/vision 2025-09-07T07:47:14.0080748Z + local build_target=git+https://github.com/pytorch/vision.git@966da7e46f65d6d49df3e31214470a4fe5cc8e66 2025-09-07T07:47:14.0081171Z + local wheel_dir=dist/vision 2025-09-07T07:47:14.0081372Z + local found_whl=0 2025-09-07T07:47:14.0081549Z + for file in "${wheel_dir}"/*.whl 2025-09-07T07:47:14.0081884Z + [[ -f dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl ]] 2025-09-07T07:47:14.0082182Z + found_whl=1 2025-09-07T07:47:14.0082342Z + break 2025-09-07T07:47:14.0082486Z + '[' 1 == 0 
']' 2025-09-07T07:47:14.0082662Z + for file in "${wheel_dir}"/*.whl 2025-09-07T07:47:14.0082981Z + pip_install_whl dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-09-07T07:47:14.0083404Z + args=('dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl') 2025-09-07T07:47:14.0083713Z + local args 2025-09-07T07:47:14.0084035Z + [[ dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl == *\ * ]] 2025-09-07T07:47:14.0084375Z + for path in "${args[@]}" 2025-09-07T07:47:14.0084703Z + echo 'Installing dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl' 2025-09-07T07:47:14.0085146Z Installing dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-09-07T07:47:14.0085656Z + python3 -mpip install --no-index --no-deps dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-09-07T07:47:14.2862301Z Processing ./dist/vision/torchvision-0.22.0a0+966da7e-cp39-cp39-linux_x86_64.whl 2025-09-07T07:47:14.2930678Z Installing collected packages: torchvision 2025-09-07T07:47:14.8215455Z Successfully installed torchvision-0.22.0a0+966da7e 2025-09-07T07:47:14.8639376Z + '[' -n '' ']' 2025-09-07T07:47:14.8639621Z + id=0 2025-09-07T07:47:14.8639811Z + test_dynamo_benchmark huggingface 0 2025-09-07T07:47:14.8643161Z ++ pwd 2025-09-07T07:47:14.8643811Z + TEST_REPORTS_DIR=/var/lib/jenkins/workspace/test/test-reports 2025-09-07T07:47:14.8644133Z + local suite=huggingface 2025-09-07T07:47:14.8644335Z + shift 2025-09-07T07:47:14.8644498Z + local shard_id=0 2025-09-07T07:47:14.8644668Z + shift 2025-09-07T07:47:14.8644936Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *perf_compare* ]] 2025-09-07T07:47:14.8645528Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *perf* ]] 2025-09-07T07:47:14.8645914Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *cpu* ]] 2025-09-07T07:47:14.8646326Z + local dt=float32 2025-09-07T07:47:14.8646590Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *amp* ]] 2025-09-07T07:47:14.8647161Z + dt=amp 2025-09-07T07:47:14.8647427Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *freezing* ]] 2025-09-07T07:47:14.8647855Z + test_single_dynamo_benchmark inference huggingface 0 --inference --amp --freezing 2025-09-07T07:47:14.8648199Z ++ pwd 2025-09-07T07:47:14.8648407Z + TEST_REPORTS_DIR=/var/lib/jenkins/workspace/test/test-reports 2025-09-07T07:47:14.8648759Z + mkdir -p /var/lib/jenkins/workspace/test/test-reports 2025-09-07T07:47:14.8668853Z + local name=inference 2025-09-07T07:47:14.8669261Z + shift 2025-09-07T07:47:14.8669570Z + local suite=huggingface 2025-09-07T07:47:14.8669912Z + shift 2025-09-07T07:47:14.8670174Z + local shard_id=0 2025-09-07T07:47:14.8670464Z + shift 2025-09-07T07:47:14.8670751Z + partition_flags=() 2025-09-07T07:47:14.8671059Z + local partition_flags 2025-09-07T07:47:14.8671771Z + [[ -n 1 ]] 2025-09-07T07:47:14.8672014Z + [[ -n 0 ]] 2025-09-07T07:47:14.8672333Z + partition_flags=(--total-partitions "$NUM_TEST_SHARDS" --partition-id "$shard_id") 2025-09-07T07:47:14.8672788Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *perf_compare* ]] 2025-09-07T07:47:14.8673212Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *perf* ]] 2025-09-07T07:47:14.8673612Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *_avx2* ]] 2025-09-07T07:47:14.8674050Z + [[ dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface == *_avx512* ]] 
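The substring checks traced above turn the TEST_CONFIG name into the flag set passed to benchmarks/dynamo/huggingface.py: max_autotune_inductor selects Inductor with max-autotune, dynamic adds dynamic shapes (batch only), cpu picks the device, amp and freezing pick the precision and freezing mode, and the shard inputs become partition flags. A minimal sketch of that mapping, covering only the config-derived flags (the fixed --ci/--accuracy/--timing/--explain/--print-compilation-time/--output arguments are added separately; the real test.sh does all of this with bash pattern matches):

# Sketch: derive the dynamo benchmark flags from the TEST_CONFIG name and shard inputs.
def dynamo_benchmark_flags(test_config, num_shards, shard_id):
    flags = []
    if "max_autotune_inductor" in test_config:
        flags += ["--inductor", "--inductor-compile-mode", "max-autotune"]
    if "dynamic" in test_config:
        flags += ["--dynamic-shapes", "--dynamic-batch-only"]
    if "cpu" in test_config:
        flags += ["--device", "cpu"]
    # Precision / freezing mode is likewise keyed off the config name.
    flags += ["--inference"]
    if "amp" in test_config:
        flags += ["--amp"]
    if "freezing" in test_config:
        flags += ["--freezing"]
    # Shard inputs become partition flags.
    flags += ["--total-partitions", str(num_shards), "--partition-id", str(shard_id)]
    return flags

print(" ".join(dynamo_benchmark_flags(
    "dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface", 1, 0)))

For this job (shard 1 of 1, partition id 0) the sketch reproduces the config-derived portion of the command that follows.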
2025-09-07T07:47:14.8675187Z + python benchmarks/dynamo/huggingface.py --ci --accuracy --timing --explain --print-compilation-time --inductor --inductor-compile-mode max-autotune --dynamic-shapes --dynamic-batch-only --device cpu --inference --amp --freezing --total-partitions 1 --partition-id 0 --output /var/lib/jenkins/workspace/test/test-reports/inference_huggingface.csv 2025-09-07T07:47:15.5688844Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:47:15.5689650Z import pynvml # type: ignore[import] 2025-09-07T07:47:18.2014225Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-09-07T07:47:18.2015701Z from pkg_resources import resource_filename 2025-09-07T07:47:18.6684856Z 2025-09-07T07:47:18.6725679Z config.json: 0% 0.00/694 [00:00bcxy", (query, key)) # multiply 2025-09-07T07:50:15.3749804Z 2025-09-07T07:50:15.3749931Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:50:15.3750483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.3751015Z layer_outputs = layer_module( 2025-09-07T07:50:15.3751396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.3751804Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.3752262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.3752706Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.3753162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.3753609Z self_outputs = self.self( 2025-09-07T07:50:15.3754045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-09-07T07:50:15.3754528Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-09-07T07:50:15.3755055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-09-07T07:50:15.3755682Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-09-07T07:50:15.3755952Z 2025-09-07T07:50:15.3756066Z cudagraph partition due to non gpu ops. 
[The "cudagraph partition due to non gpu ops" warning is emitted repeatedly here, all within the same second (2025-09-07T07:50:15), with essentially identical "Found from :" stack traces. Every trace enters through modeling_longformer.py line 1259 (torch_dynamo_resume_in_forward_at_1244), line 1199 (self.attention) and line 1135 (self.self), and bottoms out at one of a few call sites: the sliding-chunks score einsum at line 796 (reached from lines 524 or 536), the einsum over chunked values at line 878, _mask_invalid_locations at line 762 (from line 834), attn_scores += diagonal_mask at line 541, the attention softmax at line 579, the nn.functional.pad calls at lines 863 and 699 (via torch/nn/functional.py line 5294), the output transpose/reshape at line 618, and the query projection at line 509.]
Found from : 2025-09-07T07:50:15.3996591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.3997091Z layer_outputs = layer_module( 2025-09-07T07:50:15.3997428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.3997800Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.3998226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.3998645Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.3999075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.3999492Z self_outputs = self.self( 2025-09-07T07:50:15.3999900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-09-07T07:50:15.4000366Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-09-07T07:50:15.4000959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-09-07T07:50:15.4001522Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-09-07T07:50:15.4002043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-09-07T07:50:15.4002520Z chunked_hidden_states = nn.functional.pad( 2025-09-07T07:50:15.4002896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-09-07T07:50:15.4003255Z return torch._C._nn.pad(input, pad, mode, value) 2025-09-07T07:50:15.4003415Z 2025-09-07T07:50:15.4003521Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:50:15.4004076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4004587Z layer_outputs = layer_module( 2025-09-07T07:50:15.4004946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4005339Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4005786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4006245Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4006696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4007227Z self_outputs = self.self( 2025-09-07T07:50:15.4007682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-09-07T07:50:15.4008198Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-09-07T07:50:15.4008771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-09-07T07:50:15.4009372Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-09-07T07:50:15.4009595Z 2025-09-07T07:50:15.4009702Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:50:15.4010224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4010715Z layer_outputs = layer_module( 2025-09-07T07:50:15.4011072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4011472Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4011915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4012379Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4012800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4013215Z self_outputs = self.self( 2025-09-07T07:50:15.4013611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-09-07T07:50:15.4014072Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-09-07T07:50:15.4014602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-09-07T07:50:15.4015176Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-09-07T07:50:15.4015384Z 2025-09-07T07:50:15.4015494Z cudagraph partition due to non gpu ops. 
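The stacks above repeat once per Longformer encoder layer but point at a small set of call sites inside LongformerSelfAttention.forward: the query projection (line 509), the sliding-chunk score einsum (lines 524 and 796), the additive diagonal mask (line 541), the softmax (line 579), the pad and einsum pair inside _sliding_chunks_matmul_attn_probs_value (lines 613, 863, 876, 878), and the final transpose/reshape (line 618). The sketch below is a simplified, hypothetical paraphrase of that op chain, not the actual transformers implementation; the function name and shapes are made up for illustration.

# Hedged sketch: a simplified stand-in for the op chain named in the traces above.
# The function name, shapes, and structure are illustrative assumptions, not the
# real LongformerSelfAttention code.
import torch
import torch.nn as nn

def toy_sliding_attention(hidden_states, diagonal_mask, query_proj, key_proj, value_proj):
    # line 509: linear projection of the hidden states
    query = query_proj(hidden_states)
    key = key_proj(hidden_states)
    value = value_proj(hidden_states)
    # lines 524/796: score computation via einsum
    attn_scores = torch.einsum("bxd,byd->bxy", query, key)
    # line 541: additive mask
    attn_scores += diagonal_mask
    # line 579: normalize the scores
    attn_probs = nn.functional.softmax(attn_scores, dim=-1)
    # lines 863/699: a pad call (cropped again right away, kept only to mirror the trace)
    padded_value = nn.functional.pad(value, (0, 0, 1, 1), value=-1)[:, 1:-1]
    # line 878: contract probabilities against the values
    context = torch.einsum("bxy,byd->bxd", attn_probs, padded_value)
    # line 618: reorder to (seq_len, batch, embed_dim)
    return context.transpose(0, 1).contiguous()

# Example usage with made-up sizes:
hidden = torch.randn(2, 128, 64)
mask = torch.zeros(2, 128, 128)
out = toy_sliding_attention(hidden, mask, nn.Linear(64, 64), nn.Linear(64, 64), nn.Linear(64, 64))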
Found from : 2025-09-07T07:50:15.4190800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4190877Z layer_outputs = layer_module( 2025-09-07T07:50:15.4191108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4191189Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4191506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4191587Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4191873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4191955Z self_outputs = self.self( 2025-09-07T07:50:15.4192235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-09-07T07:50:15.4192428Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-09-07T07:50:15.4192431Z 2025-09-07T07:50:15.4192511Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4192588Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4192670Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4192747Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4192857Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:50:15.4193193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4193264Z layer_outputs = layer_module( 2025-09-07T07:50:15.4193483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4193564Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4193840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4193916Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4194190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4194259Z self_outputs = self.self( 2025-09-07T07:50:15.4194527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 509, in forward 2025-09-07T07:50:15.4194620Z query_vectors = self.query(hidden_states) 2025-09-07T07:50:15.4194623Z 2025-09-07T07:50:15.4194699Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4194804Z cudagraph partition due to non gpu ops. 
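The repeated "cudagraph partition due to non gpu ops" messages above are emitted while torch.compile's cudagraph partitioner splits the compiled graph around ops it does not treat as GPU ops; each "Found from :" traceback points at the Longformer source line that produced such an op. Below is a minimal, illustrative sketch of the kind of mixed-device function that forces such partition boundaries, assuming a CUDA build of PyTorch with cudagraphs enabled via the "reduce-overhead" compile mode. mixed_device_fn is a hypothetical example, not code from this job, and it is not guaranteed to reproduce these exact log lines.

import torch

def mixed_device_fn(x):
    # GPU-side work: eligible for CUDA Graph capture.
    y = x * 2
    # Non-GPU work: the transfer to CPU and the CPU-side reduction cannot be
    # captured, so the cudagraph partitioner would split the graph here.
    z = y.cpu().sum()
    # Back on the GPU: this would land in a separate captured partition.
    return y + z.to(x.device)

if torch.cuda.is_available():
    compiled = torch.compile(mixed_device_fn, mode="reduce-overhead")
    out = compiled(torch.randn(8, device="cuda"))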
Found from : 2025-09-07T07:50:15.4282564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4282643Z layer_outputs = layer_module( 2025-09-07T07:50:15.4282856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4282937Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4283217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4283292Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4283575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4283650Z self_outputs = self.self( 2025-09-07T07:50:15.4283935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-09-07T07:50:15.4284039Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-09-07T07:50:15.4284379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-09-07T07:50:15.4284572Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-09-07T07:50:15.4284575Z 2025-09-07T07:50:15.4284657Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4284744Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4284847Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:50:15.4285204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4285341Z layer_outputs = layer_module( 2025-09-07T07:50:15.4285563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4285652Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4285934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4286018Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4286339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4286413Z self_outputs = self.self( 2025-09-07T07:50:15.4286701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-09-07T07:50:15.4286963Z attn_scores += diagonal_mask 2025-09-07T07:50:15.4286969Z 2025-09-07T07:50:15.4287098Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:50:15.4287472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4287558Z layer_outputs = layer_module( 2025-09-07T07:50:15.4287807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4287893Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4288205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4288289Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4288599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4288671Z self_outputs = self.self( 2025-09-07T07:50:15.4288954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-09-07T07:50:15.4289045Z attn_probs = nn.functional.softmax( 2025-09-07T07:50:15.4289049Z 2025-09-07T07:50:15.4289131Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4289247Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:50:15.4289599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4289685Z layer_outputs = layer_module( 2025-09-07T07:50:15.4289909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4289990Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4290283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4290363Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4290657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4290735Z self_outputs = self.self( 2025-09-07T07:50:15.4291014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-09-07T07:50:15.4291142Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-09-07T07:50:15.4291498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-09-07T07:50:15.4291686Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-09-07T07:50:15.4291879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-09-07T07:50:15.4292038Z return torch._C._nn.pad(input, pad, mode, value) 2025-09-07T07:50:15.4292042Z 2025-09-07T07:50:15.4292148Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:50:15.4292502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4292583Z layer_outputs = layer_module( 2025-09-07T07:50:15.4292845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4292935Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4293219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4293298Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4293588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4293664Z self_outputs = self.self( 2025-09-07T07:50:15.4293952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-09-07T07:50:15.4294072Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-09-07T07:50:15.4294433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-09-07T07:50:15.4294574Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-09-07T07:50:15.4294897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-09-07T07:50:15.4294999Z chunked_hidden_states = nn.functional.pad( 2025-09-07T07:50:15.4295194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-09-07T07:50:15.4295305Z return torch._C._nn.pad(input, pad, mode, value) 2025-09-07T07:50:15.4295308Z 2025-09-07T07:50:15.4295413Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:50:15.4295772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4295847Z layer_outputs = layer_module( 2025-09-07T07:50:15.4296080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4296171Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4296456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4296544Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4296829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4296903Z self_outputs = self.self( 2025-09-07T07:50:15.4297193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-09-07T07:50:15.4297311Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-09-07T07:50:15.4297676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-09-07T07:50:15.4297830Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-09-07T07:50:15.4297833Z 2025-09-07T07:50:15.4297945Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:50:15.4298301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4298409Z layer_outputs = layer_module( 2025-09-07T07:50:15.4298644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4298725Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4299057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4299133Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4299460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4299532Z self_outputs = self.self( 2025-09-07T07:50:15.4299832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-09-07T07:50:15.4299964Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-09-07T07:50:15.4300348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-09-07T07:50:15.4300516Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-09-07T07:50:15.4300519Z 2025-09-07T07:50:15.4300629Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:50:15.4301004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4301086Z layer_outputs = layer_module( 2025-09-07T07:50:15.4301312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4301398Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4301680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4301767Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4302051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4302122Z self_outputs = self.self( 2025-09-07T07:50:15.4302408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-09-07T07:50:15.4302598Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-09-07T07:50:15.4302602Z 2025-09-07T07:50:15.4302693Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4302772Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4302853Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4302938Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4303040Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:50:15.4303398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4303473Z layer_outputs = layer_module( 2025-09-07T07:50:15.4303703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4303785Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4304075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4304158Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4304430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4304506Z self_outputs = self.self( 2025-09-07T07:50:15.4304775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 509, in forward 2025-09-07T07:50:15.4304892Z query_vectors = self.query(hidden_states) 2025-09-07T07:50:15.4304896Z 2025-09-07T07:50:15.4304979Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4305081Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:50:15.4305446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4305517Z layer_outputs = layer_module( 2025-09-07T07:50:15.4305782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4305864Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4306138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4306224Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4306507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4306586Z self_outputs = self.self( 2025-09-07T07:50:15.4306867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-09-07T07:50:15.4306973Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-09-07T07:50:15.4307325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-09-07T07:50:15.4307512Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-09-07T07:50:15.4307515Z 2025-09-07T07:50:15.4307628Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:50:15.4307982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4308065Z layer_outputs = layer_module( 2025-09-07T07:50:15.4308292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4308373Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4308661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4308738Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4309043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4309119Z self_outputs = self.self( 2025-09-07T07:50:15.4309413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-09-07T07:50:15.4309537Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-09-07T07:50:15.4309884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-09-07T07:50:15.4310074Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-09-07T07:50:15.4310077Z 2025-09-07T07:50:15.4310181Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:50:15.4310539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4310612Z layer_outputs = layer_module( 2025-09-07T07:50:15.4310838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4310925Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4311197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4311310Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4311584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4311660Z self_outputs = self.self( 2025-09-07T07:50:15.4311930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 524, in forward 2025-09-07T07:50:15.4312059Z attn_scores = self._sliding_chunks_query_key_matmul( 2025-09-07T07:50:15.4312406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 796, in _sliding_chunks_query_key_matmul 2025-09-07T07:50:15.4312588Z diagonal_chunked_attention_scores = torch.einsum("bcxd,bcyd->bcxy", (query, key)) # multiply 2025-09-07T07:50:15.4312591Z 2025-09-07T07:50:15.4312678Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4312759Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4312861Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:50:15.4313211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4313281Z layer_outputs = layer_module( 2025-09-07T07:50:15.4313508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4313592Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4313878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4313952Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4314229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4314309Z self_outputs = self.self( 2025-09-07T07:50:15.4314583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 541, in forward 2025-09-07T07:50:15.4314663Z attn_scores += diagonal_mask 2025-09-07T07:50:15.4314666Z 2025-09-07T07:50:15.4314766Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:50:15.4315119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4315189Z layer_outputs = layer_module( 2025-09-07T07:50:15.4315408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4315494Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4315771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4315856Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4316139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4316210Z self_outputs = self.self( 2025-09-07T07:50:15.4316499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 579, in forward 2025-09-07T07:50:15.4316580Z attn_probs = nn.functional.softmax( 2025-09-07T07:50:15.4316586Z 2025-09-07T07:50:15.4316674Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4316777Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:50:15.4317135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4317209Z layer_outputs = layer_module( 2025-09-07T07:50:15.4317477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4317565Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4317847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4317927Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4318238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4318308Z self_outputs = self.self( 2025-09-07T07:50:15.4318587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-09-07T07:50:15.4318702Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-09-07T07:50:15.4319050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 863, in _sliding_chunks_matmul_attn_probs_value 2025-09-07T07:50:15.4319225Z padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1) 2025-09-07T07:50:15.4319424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-09-07T07:50:15.4319524Z return torch._C._nn.pad(input, pad, mode, value) 2025-09-07T07:50:15.4319528Z 2025-09-07T07:50:15.4319629Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:50:15.4319987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4320060Z layer_outputs = layer_module( 2025-09-07T07:50:15.4320291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4320370Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4320654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4320739Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4321020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4321099Z self_outputs = self.self( 2025-09-07T07:50:15.4321381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-09-07T07:50:15.4321507Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-09-07T07:50:15.4321859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 876, in _sliding_chunks_matmul_attn_probs_value 2025-09-07T07:50:15.4321998Z chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) 2025-09-07T07:50:15.4322332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 699, in _pad_and_diagonalize 2025-09-07T07:50:15.4322427Z chunked_hidden_states = nn.functional.pad( 2025-09-07T07:50:15.4322628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/nn/functional.py", line 5294, in pad 2025-09-07T07:50:15.4322726Z return torch._C._nn.pad(input, pad, mode, value) 2025-09-07T07:50:15.4322730Z 2025-09-07T07:50:15.4322840Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:50:15.4323192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4323266Z layer_outputs = layer_module( 2025-09-07T07:50:15.4323499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4323579Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4323911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4323995Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4324290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4324373Z self_outputs = self.self( 2025-09-07T07:50:15.4324702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-09-07T07:50:15.4324830Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-09-07T07:50:15.4325181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-09-07T07:50:15.4325343Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-09-07T07:50:15.4325349Z 2025-09-07T07:50:15.4325454Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:50:15.4325815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4325899Z layer_outputs = layer_module( 2025-09-07T07:50:15.4326133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4326228Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4326526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4326616Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4326985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4327073Z self_outputs = self.self( 2025-09-07T07:50:15.4327391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 613, in forward 2025-09-07T07:50:15.4327519Z attn_output = self._sliding_chunks_matmul_attn_probs_value( 2025-09-07T07:50:15.4327914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 878, in _sliding_chunks_matmul_attn_probs_value 2025-09-07T07:50:15.4328083Z context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value)) 2025-09-07T07:50:15.4328087Z 2025-09-07T07:50:15.4328202Z cudagraph partition due to non gpu ops. 
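Annotation (not part of the log): the remaining Longformer traces all land on the same handful of tensor ops. A minimal sketch of the two einsum contractions named in them is below; shapes are made up, and bfloat16 is used only to match this job's AMP/freezing configuration.

```python
# Hedged sketch of the two einsum contractions the traces point at
# (_sliding_chunks_query_key_matmul and _sliding_chunks_matmul_attn_probs_value).
# All sizes are placeholders.
import torch

b, c = 8, 4                      # batch x chunk count (hypothetical)
q_len, k_len, head_dim = 512, 768, 64

query = torch.randn(b, c, q_len, head_dim, dtype=torch.bfloat16)
key = torch.randn(b, c, k_len, head_dim, dtype=torch.bfloat16)
# "bcxd,bcyd->bcxy": chunked query/key similarity scores
scores = torch.einsum("bcxd,bcyd->bcxy", (query, key))        # (b, c, q_len, k_len)

probs = torch.randn(b, c, q_len, k_len, dtype=torch.bfloat16)
value = torch.randn(b, c, k_len, head_dim, dtype=torch.bfloat16)
# "bcwd,bcdh->bcwh": attention-weighted values
context = torch.einsum("bcwd,bcdh->bcwh", (probs, value))     # (b, c, q_len, head_dim)
print(scores.shape, context.shape)
```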
Found from : 2025-09-07T07:50:15.4328595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1259, in torch_dynamo_resume_in_forward_at_1244 2025-09-07T07:50:15.4328670Z layer_outputs = layer_module( 2025-09-07T07:50:15.4328907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:50:15.4328990Z return super().__call__(*args, **kwargs) 2025-09-07T07:50:15.4329285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1199, in forward 2025-09-07T07:50:15.4329363Z self_attn_outputs = self.attention( 2025-09-07T07:50:15.4329651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1135, in forward 2025-09-07T07:50:15.4329733Z self_outputs = self.self( 2025-09-07T07:50:15.4330017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 618, in forward 2025-09-07T07:50:15.4330213Z attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() 2025-09-07T07:50:15.4330216Z 2025-09-07T07:50:15.4330299Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4330421Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4330510Z cudagraph partition due to non gpu ops 2025-09-07T07:50:15.4330591Z cudagraph partition due to non gpu ops 2025-09-07T07:51:06.9085902Z Autotune Choices Stats: 2025-09-07T07:51:06.9089377Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_85", "best_time": 6.018082500077071} 2025-09-07T07:51:06.9089945Z AUTOTUNE linear_unary(1024x768, 50265x768, 50265) 2025-09-07T07:51:06.9094215Z strides: [768, 1], [1, 0], [1] 2025-09-07T07:51:06.9094996Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:51:06.9095310Z cpp_CppMicroGemmAMX_85 6.0181 ms 100.0% 2025-09-07T07:51:06.9095595Z _linear_pointwise 12.4708 ms 48.3% 2025-09-07T07:51:06.9096035Z SingleProcess AUTOTUNE benchmarking takes 1.3830 seconds and 1.3734 seconds precompiling for 2 choices 2025-09-07T07:51:07.0137590Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:51:07.0138236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1716, in torch_dynamo_resume_in_forward_at_1703 2025-09-07T07:51:07.0138803Z prediction_scores = self.lm_head(sequence_output) 2025-09-07T07:51:07.0139271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1333, in forward 2025-09-07T07:51:07.0139706Z x = self.dense(features) 2025-09-07T07:51:07.0139832Z 2025-09-07T07:51:07.0139933Z cudagraph partition due to non gpu ops 2025-09-07T07:51:07.0140172Z cudagraph partition due to non gpu ops 2025-09-07T07:51:07.0140411Z cudagraph partition due to non gpu ops. 
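Annotation (not part of the log): the AUTOTUNE blocks above show Inductor benchmarking its C++ AMX micro-kernel GEMM template (cpp_CppMicroGemmAMX_*) against the ATen _linear_pointwise fallback for each linear layer. A minimal, hedged way to trigger the same kind of CPU autotuning is to compile a bfloat16 nn.Linear with mode="max-autotune"; whether the AMX template is among the candidates depends on the hardware and the Inductor build, and the sizes below are placeholders, not the benchmark's.

```python
# Hedged sketch: compiling a bf16 Linear on CPU with max-autotune asks Inductor
# to benchmark candidate GEMM implementations, as in the AUTOTUNE entries above.
# Layer sizes are placeholders.
import torch
import torch.nn as nn

layer = nn.Linear(768, 3072).to(torch.bfloat16).eval()
x = torch.randn(1024, 768, dtype=torch.bfloat16)

compiled = torch.compile(layer, mode="max-autotune")
with torch.no_grad():
    out = compiled(x)
print(out.shape)  # torch.Size([1024, 3072])
```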
Found from : 2025-09-07T07:51:07.0140941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/longformer/modeling_longformer.py", line 1723, in torch_dynamo_resume_in_forward_at_1703 2025-09-07T07:51:07.0141557Z masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) 2025-09-07T07:51:07.0141809Z 2025-09-07T07:51:08.8361968Z Compilation time (from dynamo_timed): 100.66371854 2025-09-07T07:51:08.8551632Z pass 2025-09-07T07:51:08.8552164Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:51:08.8553521Z TIMING: _recursive_pre_grad_passes:0.51247 _recursive_joint_graph_passes:0.97954 inductor_compile:82.02097 backend_compile:95.6188 gc:0.0059 entire_frame_compile:100.66372 _recursive_post_grad_passes:0.95824 async_compile.wait:4.45896 code_gen:59.95363 linear_unary_template_precompiling:5.70268 linear_unary_template_autotuning:2.33654 bmm_template_precompiling:1.45672 bmm_template_autotuning:0.49876 total_wall_time:100.66372 2025-09-07T07:51:08.8555071Z STATS: call_* op count: 1789 | FakeTensorMode.__torch_dispatch__:76657 | FakeTensor.__torch_dispatch__:8510 | ProxyTorchDispatchMode.__torch_dispatch__:20282 2025-09-07T07:51:08.8555669Z Dynamo produced 5 graphs covering 1789 ops with 4 graph breaks (1 unique) 2025-09-07T07:51:12.5960223Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:51:12.5961023Z import pynvml # type: ignore[import] 2025-09-07T07:51:15.2208631Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-09-07T07:51:15.2209595Z from pkg_resources import resource_filename 2025-09-07T07:51:15.8688099Z 2025-09-07T07:51:18.3732116Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:51:18.3732699Z loading model: 0it [00:02, ?it/s] 2025-09-07T07:51:18.3732976Z cpu eval BartForCausalLM 2025-09-07T07:51:20.0203929Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:51:20.3560728Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:51:20.6923642Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:51:39.2207514Z Autotune Choices Stats: 2025-09-07T07:51:39.2212476Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.11494199998196564} 2025-09-07T07:51:39.2217308Z AUTOTUNE linear_unary(1024x1024, 1024x1024, 1024) 2025-09-07T07:51:39.2218174Z strides: [1024, 1], [1, 0], [1] 2025-09-07T07:51:39.2218595Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:51:39.2218991Z cpp_CppMicroGemmAMX_0 0.1149 ms 100.0% 2025-09-07T07:51:39.2219306Z _linear_pointwise 0.1777 ms 64.7% 2025-09-07T07:51:39.2219852Z SingleProcess AUTOTUNE benchmarking takes 0.2977 seconds and 1.3743 seconds precompiling for 2 choices 2025-09-07T07:51:41.5102821Z Autotune Choices Stats: 2025-09-07T07:51:41.5103364Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_linear_pointwise", "best_time": 0.7233829999222507} 2025-09-07T07:51:41.5113828Z AUTOTUNE linear_unary(1024x1024, 4096x1024, 4096) 2025-09-07T07:51:41.5114247Z strides: [1024, 1], [1, 0], [1] 2025-09-07T07:51:41.5114554Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:51:41.5114839Z _linear_pointwise 0.7234 ms 100.0% 2025-09-07T07:51:41.5115209Z cpp_CppMicroGemmAMX_4 0.7863 ms 92.0% 2025-09-07T07:51:41.5115706Z SingleProcess AUTOTUNE benchmarking takes 0.3667 seconds and 1.5288 seconds precompiling for 2 choices 2025-09-07T07:51:43.3451691Z Autotune Choices Stats: 2025-09-07T07:51:43.3452161Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_5", "best_time": 0.547562000065227} 2025-09-07T07:51:43.3464175Z AUTOTUNE linear_unary(1024x4096, 1024x4096, 1024) 2025-09-07T07:51:43.3464633Z strides: [4096, 1], [1, 0], [1] 2025-09-07T07:51:43.3464930Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:51:43.3465304Z cpp_CppMicroGemmAMX_5 0.5476 ms 100.0% 2025-09-07T07:51:43.3465561Z _linear_pointwise 0.6503 ms 84.2% 2025-09-07T07:51:43.3465949Z SingleProcess AUTOTUNE benchmarking takes 0.3642 seconds and 1.3946 seconds precompiling for 2 choices 2025-09-07T07:51:51.3533349Z Autotune Choices Stats: 2025-09-07T07:51:51.3533890Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_72", "best_time": 6.692954999948597} 2025-09-07T07:51:51.3540217Z AUTOTUNE linear_unary(1024x1024, 50265x1024) 2025-09-07T07:51:51.3540602Z strides: [1024, 1], [1, 0] 2025-09-07T07:51:51.3540898Z dtypes: torch.bfloat16, torch.bfloat16 2025-09-07T07:51:51.3541210Z cpp_CppMicroGemmAMX_72 6.6930 ms 100.0% 2025-09-07T07:51:51.3541520Z _linear_pointwise 15.4906 ms 43.2% 2025-09-07T07:51:51.3542042Z SingleProcess AUTOTUNE benchmarking takes 1.4199 seconds and 1.3764 seconds precompiling for 2 choices 2025-09-07T07:51:51.9438921Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9440926Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9441230Z 
cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9441492Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9441708Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9441913Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9442132Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9442373Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9442578Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9442773Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9442978Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9443637Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9443887Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9444130Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9444731Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9444958Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9445348Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9445580Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9445795Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9446101Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:51:51.9446519Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:51:51.9447092Z return mod(**inputs) 2025-09-07T07:51:51.9447551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-09-07T07:51:51.9448013Z outputs = self.model.decoder( 2025-09-07T07:51:51.9448451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:51:51.9448884Z layer_outputs = decoder_layer( 2025-09-07T07:51:51.9449279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:51:51.9449686Z return super().__call__(*args, **kwargs) 2025-09-07T07:51:51.9450113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:51:51.9450562Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:51:51.9451017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:51:51.9451459Z attn_output, attn_weights = attention_interface( 2025-09-07T07:51:51.9451947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:51:51.9452483Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:51:51.9452687Z 2025-09-07T07:51:51.9452811Z cudagraph partition due to non gpu ops. 
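Annotation (not part of the log): the BartForCausalLM traces all terminate in transformers' sdpa_attention_forward, and the two flagged lines are the scaled_dot_product_attention call and the transpose(1, 2).contiguous() that follows it. A self-contained sketch of that pattern, with hypothetical shapes:

```python
# Hedged sketch of the two ops the BART traces point at in sdpa_attention_forward:
# scaled_dot_product_attention followed by transpose(1, 2).contiguous().
# Shapes are hypothetical.
import torch
import torch.nn.functional as F

batch, heads, seq, head_dim = 4, 16, 1024, 64
q = torch.randn(batch, heads, seq, head_dim, dtype=torch.bfloat16)
k = torch.randn(batch, heads, seq, head_dim, dtype=torch.bfloat16)
v = torch.randn(batch, heads, seq, head_dim, dtype=torch.bfloat16)

attn_output = F.scaled_dot_product_attention(q, k, v, is_causal=True)
# Back to (batch, seq, heads, head_dim) and contiguous, as in the traced code.
attn_output = attn_output.transpose(1, 2).contiguous()
print(attn_output.shape)
```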
Found from : 2025-09-07T07:51:51.9453221Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:51:51.9453596Z return mod(**inputs) 2025-09-07T07:51:51.9453988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-09-07T07:51:51.9454408Z outputs = self.model.decoder( 2025-09-07T07:51:51.9454815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:51:51.9455231Z layer_outputs = decoder_layer( 2025-09-07T07:51:51.9455612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:51:51.9456001Z return super().__call__(*args, **kwargs) 2025-09-07T07:51:51.9456376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:51:51.9456788Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:51:51.9457187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:51:51.9457586Z attn_output, attn_weights = attention_interface( 2025-09-07T07:51:51.9458024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:51:51.9458476Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:51:51.9458643Z 2025-09-07T07:51:51.9458726Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9458939Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9459150Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9459351Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9459564Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9459773Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9460073Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9460280Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9460504Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9460715Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9460938Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9461195Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:51:51.9461588Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:51:51.9461988Z return mod(**inputs) 2025-09-07T07:51:51.9462384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-09-07T07:51:51.9462805Z outputs = self.model.decoder( 2025-09-07T07:51:51.9463185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:51:51.9463569Z layer_outputs = decoder_layer( 2025-09-07T07:51:51.9463914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:51:51.9464277Z return super().__call__(*args, **kwargs) 2025-09-07T07:51:51.9464659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:51:51.9465056Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:51:51.9465455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:51:51.9465850Z attn_output, attn_weights = attention_interface( 2025-09-07T07:51:51.9466288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:51:51.9466765Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:51:51.9466949Z 2025-09-07T07:51:51.9467057Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:51:51.9467422Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:51:51.9467754Z return mod(**inputs) 2025-09-07T07:51:51.9468117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-09-07T07:51:51.9468505Z outputs = self.model.decoder( 2025-09-07T07:51:51.9468891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:51:51.9469281Z layer_outputs = decoder_layer( 2025-09-07T07:51:51.9469623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:51:51.9469978Z return super().__call__(*args, **kwargs) 2025-09-07T07:51:51.9470351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:51:51.9470754Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:51:51.9471151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:51:51.9471547Z attn_output, attn_weights = attention_interface( 2025-09-07T07:51:51.9471981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:51:51.9472429Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:51:51.9472598Z 2025-09-07T07:51:51.9472677Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9472885Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9473091Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9473289Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9473491Z cudagraph partition due to non gpu ops 
2025-09-07T07:51:51.9473738Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9473948Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9474138Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9474337Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9474539Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9474745Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9474973Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:51:51.9475362Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:51:51.9475689Z return mod(**inputs) 2025-09-07T07:51:51.9476054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-09-07T07:51:51.9476451Z outputs = self.model.decoder( 2025-09-07T07:51:51.9476828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:51:51.9477220Z layer_outputs = decoder_layer( 2025-09-07T07:51:51.9477575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:51:51.9477941Z return super().__call__(*args, **kwargs) 2025-09-07T07:51:51.9478322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:51:51.9478761Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:51:51.9479184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:51:51.9479602Z attn_output, attn_weights = attention_interface( 2025-09-07T07:51:51.9480072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:51:51.9480588Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:51:51.9480797Z 2025-09-07T07:51:51.9480912Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:51:51.9481306Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:51:51.9481665Z return mod(**inputs) 2025-09-07T07:51:51.9482064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-09-07T07:51:51.9482482Z outputs = self.model.decoder( 2025-09-07T07:51:51.9482897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:51:51.9483316Z layer_outputs = decoder_layer( 2025-09-07T07:51:51.9483702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:51:51.9484097Z return super().__call__(*args, **kwargs) 2025-09-07T07:51:51.9484498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:51:51.9484920Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:51:51.9485345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:51:51.9485784Z attn_output, attn_weights = attention_interface( 2025-09-07T07:51:51.9486259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:51:51.9486766Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:51:51.9487162Z 2025-09-07T07:51:51.9487256Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9487491Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9487719Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9487935Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9488160Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9488434Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9488655Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9488868Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9489089Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9489311Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9489512Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9489733Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:51:51.9490138Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:51:51.9490474Z return mod(**inputs) 2025-09-07T07:51:51.9490865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-09-07T07:51:51.9491274Z outputs = self.model.decoder( 2025-09-07T07:51:51.9491760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:51:51.9492268Z layer_outputs = decoder_layer( 2025-09-07T07:51:51.9492653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:51:51.9493045Z return super().__call__(*args, **kwargs) 2025-09-07T07:51:51.9493451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:51:51.9493897Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:51:51.9494333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:51:51.9494766Z attn_output, attn_weights = attention_interface( 2025-09-07T07:51:51.9495245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:51:51.9495759Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:51:51.9495968Z 2025-09-07T07:51:51.9496082Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:51:51.9496465Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:51:51.9496819Z return mod(**inputs) 2025-09-07T07:51:51.9497198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1901, in forward 2025-09-07T07:51:51.9497636Z outputs = self.model.decoder( 2025-09-07T07:51:51.9498002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:51:51.9498367Z layer_outputs = decoder_layer( 2025-09-07T07:51:51.9498700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:51:51.9499041Z return super().__call__(*args, **kwargs) 2025-09-07T07:51:51.9499414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:51:51.9499812Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:51:51.9500200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:51:51.9500592Z attn_output, attn_weights = attention_interface( 2025-09-07T07:51:51.9501010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:51:51.9501452Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:51:51.9501615Z 2025-09-07T07:51:51.9501693Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9501902Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9502098Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9502296Z cudagraph partition due to non gpu ops 2025-09-07T07:51:51.9502495Z cudagraph partition due to non gpu ops 
2025-09-07T07:51:51.9614075Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T07:51:51.9614432Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:51:51.9614756Z     return mod(**inputs)
2025-09-07T07:51:51.9615110Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1923, in forward
2025-09-07T07:51:51.9615630Z     loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-09-07T07:51:51.9615836Z 
2025-09-07T07:51:59.9865452Z Compilation time (from dynamo_timed): 37.669761386
2025-09-07T07:52:00.0063835Z pass
2025-09-07T07:52:00.0064344Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T07:52:00.0069404Z TIMING: _recursive_pre_grad_passes:0.0367 _recursive_joint_graph_passes:0.38944 _recursive_post_grad_passes:0.07239 linear_unary_template_precompiling:5.6856 linear_unary_template_autotuning:2.44263 async_compile.wait:0.78499 code_gen:7.32887 inductor_compile:30.33632 backend_compile:35.43987 gc:0.00143 entire_frame_compile:37.66976 total_wall_time:37.66976
2025-09-07T07:52:00.0070842Z STATS: call_* op count: 374 | FakeTensorMode.__torch_dispatch__:27604 | FakeTensor.__torch_dispatch__:3043 | ProxyTorchDispatchMode.__torch_dispatch__:7489
2025-09-07T07:52:00.0071413Z Dynamo produced 1 graphs covering 374 ops with 0 graph breaks (0 unique)
2025-09-07T07:52:02.8288827Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T07:52:02.8289613Z   import pynvml  # type: ignore[import]
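The summary above reports the Dynamo/Inductor compile wall time and that a single FX graph covered all 374 ops with no graph breaks. A minimal, hedged sketch of how the same two numbers can be obtained for an arbitrary module, assuming a recent PyTorch build where torch.compile and torch._dynamo.explain are available (exact ExplainOutput field names may differ across versions); this is not the benchmark harness itself:

import time
import torch
import torch._dynamo

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel().eval()
example = torch.randn(4, 16)

# Wall-clock time of the first compiled call, roughly what the
# "Compilation time (from dynamo_timed)" line above summarizes.
compiled = torch.compile(model)
start = time.perf_counter()
with torch.no_grad():
    compiled(example)
print(f"first compiled call: {time.perf_counter() - start:.3f}s")

# Graph / graph-break counts, the analogue of
# "Dynamo produced 1 graphs covering N ops with 0 graph breaks".
explanation = torch._dynamo.explain(model)(example)
print(explanation.graph_count, "graph(s),", explanation.graph_break_count, "graph break(s)")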
2025-09-07T07:52:05.4652813Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T07:52:05.4653762Z   from pkg_resources import resource_filename
2025-09-07T07:52:06.0858804Z 
2025-09-07T07:52:10.6858616Z loading model: 0it [00:00, ?it/s]
2025-09-07T07:52:10.6858979Z loading model: 0it [00:04, ?it/s]
2025-09-07T07:52:10.6859234Z cpu eval BartForConditionalGeneration
2025-09-07T07:52:14.2790379Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T07:52:14.9319680Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T07:52:15.5284212Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T07:53:01.3544585Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3545433Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3545713Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3545954Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3546198Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3546435Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3546677Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3546898Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3547146Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3547393Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3547642Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3547856Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3548075Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3548286Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3548491Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3548700Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3548911Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3549510Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3549720Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3549976Z cudagraph partition due to non gpu ops.
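The harness has just loaded BartForConditionalGeneration and is evaluating it on CPU. A hedged sketch of an equivalent standalone setup, assuming the Hugging Face transformers package and an illustrative facebook/bart-base checkpoint (the benchmark builds its models from its own configs, not from this checkpoint):

import torch
from transformers import AutoTokenizer, BartForConditionalGeneration

# Hypothetical checkpoint, chosen only so the sketch runs end to end.
model_name = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name).eval()  # CPU, eval mode

inputs = tokenizer("A short test sentence.", return_tensors="pt")
compiled = torch.compile(model)

with torch.no_grad():
    # Passing labels makes the model compute the same cross-entropy loss the
    # loss_fct traceback above points at.
    out = compiled(**inputs, labels=inputs["input_ids"])
print(out.loss.item(), tuple(out.logits.shape))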
Found from :
2025-09-07T07:53:01.3550384Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:53:01.3550754Z     return mod(**inputs)
2025-09-07T07:53:01.3551198Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-09-07T07:53:01.3551750Z     outputs = self.model(
2025-09-07T07:53:01.3552344Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward
2025-09-07T07:53:01.3552899Z     encoder_outputs = self.encoder(
2025-09-07T07:53:01.3553360Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward
2025-09-07T07:53:01.3553805Z     layer_outputs = encoder_layer(
2025-09-07T07:53:01.3554306Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:53:01.3554810Z     return super().__call__(*args, **kwargs)
2025-09-07T07:53:01.3555252Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward
2025-09-07T07:53:01.3555690Z     hidden_states, attn_weights = self.self_attn(
2025-09-07T07:53:01.3556107Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
2025-09-07T07:53:01.3556522Z     attn_output, attn_weights = attention_interface(
2025-09-07T07:53:01.3556982Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-09-07T07:53:01.3557476Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T07:53:01.3557681Z 
2025-09-07T07:53:01.3557803Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T07:53:01.3558172Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:53:01.3558507Z     return mod(**inputs)
2025-09-07T07:53:01.3558925Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward
2025-09-07T07:53:01.3559340Z     outputs = self.model(
2025-09-07T07:53:01.3559741Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward
2025-09-07T07:53:01.3560149Z     encoder_outputs = self.encoder(
2025-09-07T07:53:01.3560555Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward
2025-09-07T07:53:01.3560969Z     layer_outputs = encoder_layer(
2025-09-07T07:53:01.3561353Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:53:01.3561764Z     return super().__call__(*args, **kwargs)
2025-09-07T07:53:01.3562173Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward
2025-09-07T07:53:01.3562609Z     hidden_states, attn_weights = self.self_attn(
2025-09-07T07:53:01.3563135Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward
2025-09-07T07:53:01.3563593Z     attn_output, attn_weights = attention_interface(
2025-09-07T07:53:01.3564087Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-09-07T07:53:01.3564607Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T07:53:01.3564786Z 
2025-09-07T07:53:01.3564883Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3565109Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3565335Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3565612Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3565835Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3566052Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3566274Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3566495Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3566717Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3567155Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3567384Z cudagraph partition due to non gpu ops
2025-09-07T07:53:01.3567705Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T07:53:01.3650207Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3650571Z return mod(**inputs) 2025-09-07T07:53:01.3650976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3651397Z outputs = self.model( 2025-09-07T07:53:01.3651786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-09-07T07:53:01.3652221Z encoder_outputs = self.encoder( 2025-09-07T07:53:01.3652638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-09-07T07:53:01.3653067Z layer_outputs = encoder_layer( 2025-09-07T07:53:01.3653460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3653858Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3654274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-09-07T07:53:01.3654666Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:53:01.3655055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3655559Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3656002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3656493Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3656691Z 2025-09-07T07:53:01.3656806Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3657238Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3657570Z return mod(**inputs) 2025-09-07T07:53:01.3657942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3658322Z outputs = self.model( 2025-09-07T07:53:01.3658681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-09-07T07:53:01.3659082Z encoder_outputs = self.encoder( 2025-09-07T07:53:01.3659481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-09-07T07:53:01.3659901Z layer_outputs = encoder_layer( 2025-09-07T07:53:01.3660281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3660811Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3661252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-09-07T07:53:01.3661650Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:53:01.3662057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3662469Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3662924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3663381Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3663554Z 2025-09-07T07:53:01.3663638Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3663865Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3664074Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3664277Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3664476Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3664683Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3664889Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3665100Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3665313Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3665532Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3665754Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3666008Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3666392Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3666744Z return mod(**inputs) 2025-09-07T07:53:01.3667136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3667539Z outputs = self.model( 2025-09-07T07:53:01.3667899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-09-07T07:53:01.3668277Z encoder_outputs = self.encoder( 2025-09-07T07:53:01.3668647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-09-07T07:53:01.3669031Z layer_outputs = encoder_layer( 2025-09-07T07:53:01.3669389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3669804Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3670193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-09-07T07:53:01.3670596Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:53:01.3671000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3671444Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3671877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3672355Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3672551Z 2025-09-07T07:53:01.3672661Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3673032Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3673367Z return mod(**inputs) 2025-09-07T07:53:01.3673740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3674129Z outputs = self.model( 2025-09-07T07:53:01.3674503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-09-07T07:53:01.3674903Z encoder_outputs = self.encoder( 2025-09-07T07:53:01.3675285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-09-07T07:53:01.3675679Z layer_outputs = encoder_layer( 2025-09-07T07:53:01.3676043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3676420Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3676817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-09-07T07:53:01.3677224Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:53:01.3677630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3678045Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3678501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3678960Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3679136Z 2025-09-07T07:53:01.3679221Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3679444Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3679667Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3679883Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3680093Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3680315Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3680535Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3680751Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3680959Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3681176Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3681393Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3681634Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3682006Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3682345Z return mod(**inputs) 2025-09-07T07:53:01.3682720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3683114Z outputs = self.model( 2025-09-07T07:53:01.3683482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-09-07T07:53:01.3683909Z encoder_outputs = self.encoder( 2025-09-07T07:53:01.3684295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-09-07T07:53:01.3684692Z layer_outputs = encoder_layer( 2025-09-07T07:53:01.3685099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3685497Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3685963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-09-07T07:53:01.3686395Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:53:01.3686888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3687341Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3687815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3688341Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3688535Z 2025-09-07T07:53:01.3688643Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3689018Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3689410Z return mod(**inputs) 2025-09-07T07:53:01.3689806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3690278Z outputs = self.model( 2025-09-07T07:53:01.3690679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-09-07T07:53:01.3691102Z encoder_outputs = self.encoder( 2025-09-07T07:53:01.3691495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-09-07T07:53:01.3691917Z layer_outputs = encoder_layer( 2025-09-07T07:53:01.3692298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3692689Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3693110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-09-07T07:53:01.3693541Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:53:01.3693972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3694410Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3694879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3695368Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3695543Z 2025-09-07T07:53:01.3695628Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3695856Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3696081Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3696302Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3696514Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3696736Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3696959Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3697179Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3697392Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3697610Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3697830Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3698083Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3698463Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3698838Z return mod(**inputs) 2025-09-07T07:53:01.3699199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3699576Z outputs = self.model( 2025-09-07T07:53:01.3699924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-09-07T07:53:01.3700306Z encoder_outputs = self.encoder( 2025-09-07T07:53:01.3700709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-09-07T07:53:01.3701092Z layer_outputs = encoder_layer( 2025-09-07T07:53:01.3701454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3701830Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3702220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-09-07T07:53:01.3702628Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:53:01.3703045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3703452Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3703890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3704376Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3704566Z 2025-09-07T07:53:01.3704674Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3705039Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3705368Z return mod(**inputs) 2025-09-07T07:53:01.3705735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3706146Z outputs = self.model( 2025-09-07T07:53:01.3706509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-09-07T07:53:01.3706909Z encoder_outputs = self.encoder( 2025-09-07T07:53:01.3707317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-09-07T07:53:01.3707712Z layer_outputs = encoder_layer( 2025-09-07T07:53:01.3708077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3708450Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3708855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-09-07T07:53:01.3709244Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:53:01.3709640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3710041Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3710482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3710934Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3711096Z 2025-09-07T07:53:01.3711180Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3711397Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3711613Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3711823Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3712025Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3712236Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3712447Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3712688Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3712886Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3713091Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3713295Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3713620Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3714024Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3714592Z return mod(**inputs) 2025-09-07T07:53:01.3715101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3715547Z outputs = self.model( 2025-09-07T07:53:01.3715967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-09-07T07:53:01.3716435Z encoder_outputs = self.encoder( 2025-09-07T07:53:01.3716873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-09-07T07:53:01.3717343Z layer_outputs = encoder_layer( 2025-09-07T07:53:01.3717727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3718154Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3718627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-09-07T07:53:01.3719087Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:53:01.3719535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3719998Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3720513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3721046Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3721305Z 2025-09-07T07:53:01.3721431Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3721852Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3722250Z return mod(**inputs) 2025-09-07T07:53:01.3722661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3723122Z outputs = self.model( 2025-09-07T07:53:01.3723560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-09-07T07:53:01.3724031Z encoder_outputs = self.encoder( 2025-09-07T07:53:01.3724456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-09-07T07:53:01.3724932Z layer_outputs = encoder_layer( 2025-09-07T07:53:01.3725383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3725823Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3726317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-09-07T07:53:01.3726787Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:53:01.3727393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3727933Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3728491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3729047Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3729256Z 2025-09-07T07:53:01.3743171Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3743729Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3743959Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3744175Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3744378Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3744591Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3744804Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3745153Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3745372Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3745738Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3745961Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3746222Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3746603Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3746957Z return mod(**inputs) 2025-09-07T07:53:01.3747359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3747760Z outputs = self.model( 2025-09-07T07:53:01.3748127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-09-07T07:53:01.3748519Z encoder_outputs = self.encoder( 2025-09-07T07:53:01.3748907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-09-07T07:53:01.3749294Z layer_outputs = encoder_layer( 2025-09-07T07:53:01.3749657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3750021Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3750414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-09-07T07:53:01.3750827Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:53:01.3751240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3751668Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3752111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3752591Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3752785Z 2025-09-07T07:53:01.3752901Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3753270Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3753598Z return mod(**inputs) 2025-09-07T07:53:01.3753952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3754334Z outputs = self.model( 2025-09-07T07:53:01.3754697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1270, in forward 2025-09-07T07:53:01.3755084Z encoder_outputs = self.encoder( 2025-09-07T07:53:01.3755451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 869, in forward 2025-09-07T07:53:01.3755831Z layer_outputs = encoder_layer( 2025-09-07T07:53:01.3756184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3756552Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3756936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 312, in forward 2025-09-07T07:53:01.3757326Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:53:01.3757725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3758173Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3758611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3759061Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3759225Z 2025-09-07T07:53:01.3759311Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3759528Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3759739Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3759980Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3760181Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3760391Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3760596Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3760802Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3760998Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3761204Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3761616Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3761852Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3762211Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3762547Z return mod(**inputs) 2025-09-07T07:53:01.3762921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3763316Z outputs = self.model( 2025-09-07T07:53:01.3763690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3764079Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3764465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3764855Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3765211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3765592Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3766009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3766450Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3766984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3767439Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3767915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3768406Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3768602Z 2025-09-07T07:53:01.3768713Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3769088Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3769430Z return mod(**inputs) 2025-09-07T07:53:01.3769781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3770169Z outputs = self.model( 2025-09-07T07:53:01.3770537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3770934Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3771309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3771715Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3772062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3772432Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3772887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3773296Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3773710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3774127Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3774613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3775085Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3775251Z 2025-09-07T07:53:01.3775334Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3775551Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3775763Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3775968Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3776173Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3776391Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3776600Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3776820Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3777054Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3777424Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3777753Z return mod(**inputs) 2025-09-07T07:53:01.3778129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3778520Z outputs = self.model( 2025-09-07T07:53:01.3778885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3779284Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3779673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3780069Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3780431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3780796Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3781190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3781619Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3782043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3782454Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3782911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3783401Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3783588Z 2025-09-07T07:53:01.3783703Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3784075Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3784405Z return mod(**inputs) 2025-09-07T07:53:01.3784774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3785166Z outputs = self.model( 2025-09-07T07:53:01.3785538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3785932Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3786315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3786707Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3787117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3787489Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3787881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3788309Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3788767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3789181Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3789632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3790087Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3790262Z 2025-09-07T07:53:01.3790347Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3790565Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3790778Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3790981Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3791189Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3791397Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3791605Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3791812Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3792017Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3792226Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3792434Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3792671Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3793037Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3793361Z return mod(**inputs) 2025-09-07T07:53:01.3793725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3794104Z outputs = self.model( 2025-09-07T07:53:01.3794455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3794841Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3795218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3795601Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3795950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3796303Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3796687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3797105Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3797518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3797920Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3798371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3798870Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3799061Z 2025-09-07T07:53:01.3799169Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3799528Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3799846Z return mod(**inputs) 2025-09-07T07:53:01.3800202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3800615Z outputs = self.model( 2025-09-07T07:53:01.3800981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3801377Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3801759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3802191Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3802601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3802997Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3803410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3803855Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3804295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3804740Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3805224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3805713Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3805897Z 2025-09-07T07:53:01.3805986Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3806223Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3806457Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3806688Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3807002Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3807237Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3807468Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3807695Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3807950Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3808350Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3808679Z return mod(**inputs) 2025-09-07T07:53:01.3809038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3809407Z outputs = self.model( 2025-09-07T07:53:01.3809764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3810149Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3810526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3810907Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3811244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3811603Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3811991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3812406Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3812806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3813207Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3813650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3814119Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3814300Z 2025-09-07T07:53:01.3814413Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3814770Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3815123Z return mod(**inputs) 2025-09-07T07:53:01.3815476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3815850Z outputs = self.model( 2025-09-07T07:53:01.3816205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3816579Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3816981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3817362Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3817708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3818065Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3818452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3818866Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3819286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3819703Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3820131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3820582Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3820749Z 2025-09-07T07:53:01.3820839Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3821046Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3821249Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3821444Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3821645Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3821847Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3822052Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3822249Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3822451Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3822661Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3822856Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3823077Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3823432Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3823756Z return mod(**inputs) 2025-09-07T07:53:01.3824119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3824478Z outputs = self.model( 2025-09-07T07:53:01.3824831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3825210Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3825581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3825960Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3826300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3826661Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3827049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3827441Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3827823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3828212Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3828638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3829133Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3829308Z 2025-09-07T07:53:01.3829417Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3829759Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3830078Z return mod(**inputs) 2025-09-07T07:53:01.3830466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3830837Z outputs = self.model( 2025-09-07T07:53:01.3831184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3831558Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3831920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3832286Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3832622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3832972Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3833346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3833738Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3834124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3834513Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3834938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3835374Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3835532Z 2025-09-07T07:53:01.3835614Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3835812Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3836016Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3836218Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3836417Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3836610Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3836810Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3837011Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3837239Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3837580Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3837892Z return mod(**inputs) 2025-09-07T07:53:01.3838242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3838614Z outputs = self.model( 2025-09-07T07:53:01.3838958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3839331Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3839697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3840066Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3840407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3840756Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3841133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3841533Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3841931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3842358Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3842783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3843251Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3843437Z 2025-09-07T07:53:01.3843542Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3843927Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3844245Z return mod(**inputs) 2025-09-07T07:53:01.3844603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3844979Z outputs = self.model( 2025-09-07T07:53:01.3845496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3845898Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3846296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3846710Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3847152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3847558Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3847978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3848400Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3848799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3849193Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3849635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3850053Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3850213Z 2025-09-07T07:53:01.3850290Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3850494Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3850696Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3850898Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3851093Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3851298Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3851500Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3851701Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3851893Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3852091Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3852291Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3852519Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3852861Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3853175Z return mod(**inputs) 2025-09-07T07:53:01.3853518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3853888Z outputs = self.model( 2025-09-07T07:53:01.3854232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3854604Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3854971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3855350Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3855701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3856129Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3856511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3856921Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3857323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3857779Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3858213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3858724Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3858914Z 2025-09-07T07:53:01.3859020Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3859385Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3859706Z return mod(**inputs) 2025-09-07T07:53:01.3860062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3860437Z outputs = self.model( 2025-09-07T07:53:01.3860797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3861179Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3861545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3861922Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3862265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3862624Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3862996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3863399Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3863797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3864197Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3864634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3865071Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3865235Z 2025-09-07T07:53:01.3865314Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3865523Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3865732Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3865937Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3866140Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3866342Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3866547Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3866751Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3866976Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3867332Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3867654Z return mod(**inputs) 2025-09-07T07:53:01.3868015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3868384Z outputs = self.model( 2025-09-07T07:53:01.3868740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3869123Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3869497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3869923Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3870256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3870613Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3870996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3871445Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3871837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3872226Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3872650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3873106Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3873282Z 2025-09-07T07:53:01.3873391Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3873731Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3874045Z return mod(**inputs) 2025-09-07T07:53:01.3874388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3874755Z outputs = self.model( 2025-09-07T07:53:01.3875105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3875468Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3875831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3876200Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3876541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3876886Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3877255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3877651Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3878050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3878490Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3878918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3879374Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3879535Z 2025-09-07T07:53:01.3879614Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3879826Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3880029Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3880222Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3880419Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3880618Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3880815Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3881007Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3881211Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3881420Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3881627Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3881855Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3882211Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3882530Z return mod(**inputs) 2025-09-07T07:53:01.3882883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3883286Z outputs = self.model( 2025-09-07T07:53:01.3883641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3884019Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3884391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3884777Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3885172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3885566Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3885975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3886411Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3886911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3887365Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3887846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3888372Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3888583Z 2025-09-07T07:53:01.3888712Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3889090Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3889443Z return mod(**inputs) 2025-09-07T07:53:01.3889827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3890238Z outputs = self.model( 2025-09-07T07:53:01.3890629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3891038Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3891444Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3891868Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3892242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3892631Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3893040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3893475Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3893907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3894343Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3894807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3895290Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3895468Z 2025-09-07T07:53:01.3895556Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3895786Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3895995Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3896196Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3896400Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3896609Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3896822Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3897022Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3897261Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3897623Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3897992Z return mod(**inputs) 2025-09-07T07:53:01.3898360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3898730Z outputs = self.model( 2025-09-07T07:53:01.3899089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3899454Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3899841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3900201Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3900548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3900906Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3901285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3901688Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3902093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3902493Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3902922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3903377Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3903551Z 2025-09-07T07:53:01.3903653Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3904003Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3904318Z return mod(**inputs) 2025-09-07T07:53:01.3904668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3905030Z outputs = self.model( 2025-09-07T07:53:01.3905378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3905756Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3906126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3906504Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3906841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3907200Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3907577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3907989Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3908395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3908793Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3909216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3909651Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3909806Z 2025-09-07T07:53:01.3909894Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3910105Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3910304Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3910511Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3910718Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3910923Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3911156Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3911359Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3911566Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3911769Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3911973Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3912202Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3912556Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3912909Z return mod(**inputs) 2025-09-07T07:53:01.3913268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3913648Z outputs = self.model( 2025-09-07T07:53:01.3914012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3914399Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3914779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3915156Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3915507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3915875Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3916261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3916664Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3917069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3917472Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3917916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3918394Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3918577Z 2025-09-07T07:53:01.3918684Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3919047Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3919377Z return mod(**inputs) 2025-09-07T07:53:01.3919744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3920128Z outputs = self.model( 2025-09-07T07:53:01.3920485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3920870Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3921251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3921651Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3922004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3922380Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3922786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3923240Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3923660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3923764Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3924058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3924181Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3924222Z 2025-09-07T07:53:01.3924305Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3924393Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3924470Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3924556Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3924634Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3924711Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3924793Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3924870Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3925009Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3925232Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3925303Z return mod(**inputs) 2025-09-07T07:53:01.3925578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3925653Z outputs = self.model( 2025-09-07T07:53:01.3925929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3926011Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3926276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3926362Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3926598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3926697Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3927032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3927161Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3927438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3927548Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3927869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3928012Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3928016Z 2025-09-07T07:53:01.3928141Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3928369Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3928436Z return mod(**inputs) 2025-09-07T07:53:01.3928693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3928762Z outputs = self.model( 2025-09-07T07:53:01.3929014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3929093Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3929339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3929421Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3929647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3929734Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3929973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3930078Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3930323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3930418Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3930710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3930867Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3930870Z 2025-09-07T07:53:01.3930959Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3931036Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3931113Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3931197Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3931271Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3931386Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3931463Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3931538Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3931619Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3931695Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3931769Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3931879Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3932078Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3932153Z return mod(**inputs) 2025-09-07T07:53:01.3932396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3932474Z outputs = self.model( 2025-09-07T07:53:01.3932722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3932796Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3933040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3933111Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3933341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3933423Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3933657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3933760Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3933991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3934091Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3934366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3934493Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3934504Z 2025-09-07T07:53:01.3934607Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3934806Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3934885Z return mod(**inputs) 2025-09-07T07:53:01.3935136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3935210Z outputs = self.model( 2025-09-07T07:53:01.3935443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3935516Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3935758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3935827Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3936042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3936119Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3936351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3936490Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3936724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3936822Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3937099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3937248Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3937252Z 2025-09-07T07:53:01.3937333Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3937409Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3937490Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3937563Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3937644Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3937721Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3937796Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3937878Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3937978Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3938173Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3938238Z return mod(**inputs) 2025-09-07T07:53:01.3938479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3938555Z outputs = self.model( 2025-09-07T07:53:01.3938794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3938874Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3939112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3939185Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3939403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3939482Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3939723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3939828Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3940065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3940165Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3940441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3940572Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3940578Z 2025-09-07T07:53:01.3940678Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3940873Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3940938Z return mod(**inputs) 2025-09-07T07:53:01.3941178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3941253Z outputs = self.model( 2025-09-07T07:53:01.3941496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3941573Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3941808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3941878Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3942098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3942209Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3942447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3942551Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3942788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3942951Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3943227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3943337Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3943340Z 2025-09-07T07:53:01.3943419Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3943501Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3943579Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3943652Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3943731Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3943804Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3943884Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3943958Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3944033Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3944114Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3944189Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3944296Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3944487Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3944552Z return mod(**inputs) 2025-09-07T07:53:01.3944797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3944869Z outputs = self.model( 2025-09-07T07:53:01.3945232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3945312Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3945552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3945634Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3945848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3945939Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3946174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3946270Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3946513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3946607Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3946886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3947011Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3947014Z 2025-09-07T07:53:01.3947124Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3947321Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3947387Z return mod(**inputs) 2025-09-07T07:53:01.3947632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3947700Z outputs = self.model( 2025-09-07T07:53:01.3947942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3948075Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3948309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3948387Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3948598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3948681Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3948957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3949055Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3949296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3949389Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3949679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3949786Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3949789Z 2025-09-07T07:53:01.3949879Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3949958Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3950034Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3950122Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3950197Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3950277Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3950351Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3950426Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3950533Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3950731Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3950805Z return mod(**inputs) 2025-09-07T07:53:01.3951045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3951114Z outputs = self.model( 2025-09-07T07:53:01.3951363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3951437Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3951695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3951765Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3951974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3952057Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3952295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3952414Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3952652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3952752Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3953034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3953164Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3953168Z 2025-09-07T07:53:01.3953278Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3953476Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3953550Z return mod(**inputs) 2025-09-07T07:53:01.3953793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3953919Z outputs = self.model( 2025-09-07T07:53:01.3954171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3954246Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3954491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3954595Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3954813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3954904Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3955141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3955254Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3955497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3955597Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3955878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3955982Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3955986Z 2025-09-07T07:53:01.3956075Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3956153Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3956236Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3956310Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3956384Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3956465Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3956539Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3956623Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3956697Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3956771Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3956851Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3956953Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3957154Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3957220Z return mod(**inputs) 2025-09-07T07:53:01.3957466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3957542Z outputs = self.model( 2025-09-07T07:53:01.3957784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3957867Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3958109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3958183Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3958404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3958483Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3958727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3958826Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3959074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3959169Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3959450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3959625Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3959628Z 2025-09-07T07:53:01.3959731Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3959932Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3959999Z return mod(**inputs) 2025-09-07T07:53:01.3960244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3960348Z outputs = self.model( 2025-09-07T07:53:01.3960593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3960676Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3960922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3960993Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3961220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3961300Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3961546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3961645Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3961895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3961990Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3962273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3962388Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3962391Z 2025-09-07T07:53:01.3962469Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3962556Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3962632Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3962704Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3962787Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3962862Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3962943Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3963018Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3963123Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3963329Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3963395Z return mod(**inputs) 2025-09-07T07:53:01.3963650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3963718Z outputs = self.model( 2025-09-07T07:53:01.3963964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3964048Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3964291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3964371Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3964586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3964667Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3964919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3965027Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3965274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3965399Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3965704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3965843Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3965847Z 2025-09-07T07:53:01.3965958Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3966179Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3966285Z return mod(**inputs) 2025-09-07T07:53:01.3966570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3966645Z outputs = self.model( 2025-09-07T07:53:01.3966982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3967076Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3967343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3967432Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3967674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3967771Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3968052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3968161Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3968409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3968505Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3968816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3968934Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3968938Z 2025-09-07T07:53:01.3969024Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3969118Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3969201Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3969292Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3969374Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3969458Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3969550Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3969632Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3969722Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3969803Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3969883Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3970001Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3970215Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3970294Z return mod(**inputs) 2025-09-07T07:53:01.3970556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3970631Z outputs = self.model( 2025-09-07T07:53:01.3970900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3970981Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3971250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3971329Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3971566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3971658Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3971955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3972072Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3972329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3972438Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3972784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3972926Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3972930Z 2025-09-07T07:53:01.3973050Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3973265Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3973349Z return mod(**inputs) 2025-09-07T07:53:01.3973614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3973688Z outputs = self.model( 2025-09-07T07:53:01.3973958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3974037Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3974307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3974386Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3974627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3974715Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3974975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3975089Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3975350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3975458Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3975764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3975875Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3975878Z 2025-09-07T07:53:01.3975966Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3976043Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3976126Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3976201Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3976274Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3976366Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3976445Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3976526Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3976627Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3976823Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3976898Z return mod(**inputs) 2025-09-07T07:53:01.3977140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3977217Z outputs = self.model( 2025-09-07T07:53:01.3977458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3977532Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3977780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3977850Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3978109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3978188Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3978436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3978544Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3978819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3978931Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3979224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3979362Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3979365Z 2025-09-07T07:53:01.3979473Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3979674Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3979751Z return mod(**inputs) 2025-09-07T07:53:01.3980003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3980080Z outputs = self.model( 2025-09-07T07:53:01.3980332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3980416Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3980709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3980778Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3980995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3981076Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3981315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3981422Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3981655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3981755Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3982038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3982151Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3982154Z 2025-09-07T07:53:01.3982232Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3982316Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3982405Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3982477Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3982557Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3982630Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3982703Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3982785Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3982857Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3982936Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3983009Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3983111Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3983308Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3983371Z return mod(**inputs) 2025-09-07T07:53:01.3983616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3983721Z outputs = self.model( 2025-09-07T07:53:01.3983959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3984039Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3984277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3984354Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3984594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3984683Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3984919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3985015Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3985258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3985352Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3985637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3985770Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3985774Z 2025-09-07T07:53:01.3985878Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3986092Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3986171Z return mod(**inputs) 2025-09-07T07:53:01.3986422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3986491Z outputs = self.model( 2025-09-07T07:53:01.3986742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3986820Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3987065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3987146Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3987374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3987460Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3987697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3987793Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3988036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3988127Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3988412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3988517Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3988520Z 2025-09-07T07:53:01.3988607Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3988683Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3988757Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3988837Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3988913Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3988986Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3989065Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3989138Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3989245Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3989436Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3989535Z return mod(**inputs) 2025-09-07T07:53:01.3989785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3989853Z outputs = self.model( 2025-09-07T07:53:01.3990101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3990177Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3990455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3990529Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3990744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3990831Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3991070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3991189Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3991429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3991521Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3991812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3991944Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3991947Z 2025-09-07T07:53:01.3992056Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3992253Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3992329Z return mod(**inputs) 2025-09-07T07:53:01.3992581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3992650Z outputs = self.model( 2025-09-07T07:53:01.3992894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3992966Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3993208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3993278Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3993494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3993583Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3993824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.3993938Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.3994184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3994277Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3994566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.3994672Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.3994676Z 2025-09-07T07:53:01.3994762Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3994844Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3994927Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3995002Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3995076Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3995158Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3995235Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3995309Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3995423Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3995498Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3995579Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.3995681Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3995874Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3995949Z return mod(**inputs) 2025-09-07T07:53:01.3996228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3996304Z outputs = self.model( 2025-09-07T07:53:01.3996546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3996627Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3996866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3996940Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3997164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3997243Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.3997489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.3997586Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.3997825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.3997926Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.3998205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.3998340Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.3998346Z 2025-09-07T07:53:01.3998447Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.3998650Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.3998716Z return mod(**inputs) 2025-09-07T07:53:01.3998957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.3999032Z outputs = self.model( 2025-09-07T07:53:01.3999277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.3999359Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.3999601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.3999673Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.3999896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.3999977Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.4000225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 413, in forward 2025-09-07T07:53:01.4000321Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:53:01.4000558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.4000663Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.4000945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.4001058Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.4001061Z 2025-09-07T07:53:01.4001141Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4001598Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4001676Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4001751Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4001837Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4001914Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4001997Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4002074Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4002177Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.4002412Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.4002480Z return mod(**inputs) 2025-09-07T07:53:01.4002732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.4002801Z outputs = self.model( 2025-09-07T07:53:01.4003043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.4003128Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.4003380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.4003459Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.4003685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.4003768Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.4004026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.4004136Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.4004390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.4004486Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.4004782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:53:01.4004924Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:01.4004927Z 2025-09-07T07:53:01.4005033Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.4005242Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.4005310Z return mod(**inputs) 2025-09-07T07:53:01.4005590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1471, in forward 2025-09-07T07:53:01.4005664Z outputs = self.model( 2025-09-07T07:53:01.4005928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1288, in forward 2025-09-07T07:53:01.4006015Z decoder_outputs = self.decoder( 2025-09-07T07:53:01.4006278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1115, in forward 2025-09-07T07:53:01.4006366Z layer_outputs = decoder_layer( 2025-09-07T07:53:01.4006607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:01.4006696Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:01.4007064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 430, in forward 2025-09-07T07:53:01.4007195Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:53:01.4007482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 254, in forward 2025-09-07T07:53:01.4007589Z attn_output, attn_weights = attention_interface( 2025-09-07T07:53:01.4007930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:53:01.4008098Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:53:01.4008102Z 2025-09-07T07:53:01.4008182Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4008269Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4008346Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4008431Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4008507Z cudagraph partition due to non gpu ops 2025-09-07T07:53:01.4008611Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:01.4008852Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:01.4008922Z return mod(**inputs) 2025-09-07T07:53:01.4009174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1497, in forward 2025-09-07T07:53:01.4009341Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) 2025-09-07T07:53:01.4009348Z 2025-09-07T07:53:20.1876831Z Compilation time (from dynamo_timed): 62.636799021 2025-09-07T07:53:20.2082785Z pass 2025-09-07T07:53:20.2087629Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:53:20.2093403Z TIMING: _recursive_pre_grad_passes:0.08818 _recursive_joint_graph_passes:0.83516 _recursive_post_grad_passes:0.15772 linear_unary_template_precompiling:0.03449 async_compile.wait:0.83463 code_gen:17.62575 inductor_compile:43.5284 backend_compile:56.87525 gc:0.00073 entire_frame_compile:62.6368 total_wall_time:62.6368 2025-09-07T07:53:20.2094592Z STATS: call_* op count: 982 | FakeTensorMode.__torch_dispatch__:70188 | FakeTensor.__torch_dispatch__:7547 | ProxyTorchDispatchMode.__torch_dispatch__:19216 2025-09-07T07:53:20.2095092Z Dynamo produced 1 graphs covering 982 ops with 0 graph breaks (0 unique) 2025-09-07T07:53:23.7768969Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:53:23.7770625Z import pynvml # type: ignore[import] 2025-09-07T07:53:26.4402609Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
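The AUTOTUNE linear_unary blocks a little further down are produced by Inductor's max-autotune GEMM selection, which this dynamic_cpu_max_autotune_inductor_amp_freezing config enables: for each linear shape it benchmarks the oneDNN _linear_pointwise kernel against the generated cpp_CppMicroGemmAMX template and keeps the faster choice. A rough sketch of turning the same selection on outside the benchmark harness; the config names are the commonly used ones in recent builds and are an assumption here, and reproducing the linear_unary template usually also needs freezing, as this job uses:

    import torch
    import torch._inductor.config as inductor_config

    # Ask Inductor to autotune GEMM backends instead of always taking the ATen/oneDNN kernel.
    inductor_config.max_autotune = True
    inductor_config.max_autotune_gemm = True   # assumption: present in recent builds
    inductor_config.freezing = True            # the benchmark's "freezing" flag; folds weights

    lin = torch.nn.Linear(768, 3072).to(torch.bfloat16).eval()
    compiled = torch.compile(lin, mode="max-autotune")

    with torch.no_grad():
        # Shape mirrors one of the AUTOTUNE entries below; may trigger similar GEMM autotuning.
        compiled(torch.randn(512, 768, dtype=torch.bfloat16))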
2025-09-07T07:53:26.4404358Z from pkg_resources import resource_filename 2025-09-07T07:53:27.0938418Z 2025-09-07T07:53:28.2509400Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:53:28.2515737Z loading model: 0it [00:01, ?it/s] 2025-09-07T07:53:28.2521376Z cpu eval BertForMaskedLM 2025-09-07T07:53:28.7314396Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:53:28.8626608Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:53:28.9921186Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:53:47.3941683Z Autotune Choices Stats: 2025-09-07T07:53:47.3942357Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_linear_pointwise", "best_time": 0.08973900003184099} 2025-09-07T07:53:47.3950824Z AUTOTUNE linear_unary(512x768, 768x768, 768) 2025-09-07T07:53:47.3951179Z strides: [768, 1], [1, 0], [1] 2025-09-07T07:53:47.3951448Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:53:47.3951740Z _linear_pointwise 0.0897 ms 100.0% 2025-09-07T07:53:47.3951987Z cpp_CppMicroGemmAMX_0 0.1122 ms 80.0% 2025-09-07T07:53:47.3952404Z SingleProcess AUTOTUNE benchmarking takes 0.2699 seconds and 1.3557 seconds precompiling for 2 choices 2025-09-07T07:53:49.5304309Z Autotune Choices Stats: 2025-09-07T07:53:49.5305255Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_linear_pointwise", "best_time": 0.49123150006380456} 2025-09-07T07:53:49.5309998Z AUTOTUNE linear_unary(512x768, 3072x768, 3072) 2025-09-07T07:53:49.5310283Z strides: [768, 1], [1, 0], [1] 2025-09-07T07:53:49.5310547Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:53:49.5310828Z _linear_pointwise 0.4912 ms 100.0% 2025-09-07T07:53:49.5311070Z cpp_CppMicroGemmAMX_4 0.6151 ms 79.9% 2025-09-07T07:53:49.5311595Z SingleProcess AUTOTUNE benchmarking takes 0.2990 seconds and 1.4911 seconds precompiling for 2 choices 2025-09-07T07:53:51.2503745Z Autotune Choices Stats: 2025-09-07T07:53:51.2504215Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_5", "best_time": 0.12474000004658592} 2025-09-07T07:53:51.2515853Z AUTOTUNE linear_unary(512x3072, 768x3072, 768) 2025-09-07T07:53:51.2516160Z strides: [3072, 1], [1, 0], [1] 2025-09-07T07:53:51.2516433Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:53:51.2516750Z cpp_CppMicroGemmAMX_5 0.1247 ms 100.0% 2025-09-07T07:53:51.2516991Z _linear_pointwise 0.2108 ms 59.2% 2025-09-07T07:53:51.2517377Z SingleProcess AUTOTUNE benchmarking takes 0.2938 seconds and 1.3503 seconds precompiling for 2 choices 2025-09-07T07:53:58.4795402Z Autotune Choices Stats: 2025-09-07T07:53:58.4796131Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_linear_pointwise", "best_time": 2.366303500025424} 2025-09-07T07:53:58.4802586Z AUTOTUNE linear_unary(512x768, 30522x768, 30522) 2025-09-07T07:53:58.4803008Z strides: [768, 1], [1, 0], [1] 2025-09-07T07:53:58.4803385Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:53:58.4803793Z _linear_pointwise 2.3663 ms 100.0% 2025-09-07T07:53:58.4804151Z cpp_CppMicroGemmAMX_73 2.4736 ms 95.7% 2025-09-07T07:53:58.4804782Z SingleProcess AUTOTUNE benchmarking takes 0.6089 seconds and 1.3700 seconds precompiling for 2 choices 2025-09-07T07:53:58.9000557Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9006195Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9012481Z 
cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9018366Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9024587Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9029533Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9033775Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9038284Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9040241Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9040490Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9040707Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9040918Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9041130Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9041342Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9041555Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9041762Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9042024Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9042243Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9042468Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9042730Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:53:58.9043151Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:58.9043554Z return mod(**inputs) 2025-09-07T07:53:58.9043991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-09-07T07:53:58.9044422Z outputs = self.bert( 2025-09-07T07:53:58.9044799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-09-07T07:53:58.9045406Z encoder_outputs = self.encoder( 2025-09-07T07:53:58.9045838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-09-07T07:53:58.9046636Z layer_outputs = layer_module( 2025-09-07T07:53:58.9047092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:58.9047503Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:58.9047941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-09-07T07:53:58.9048377Z self_attention_outputs = self.attention( 2025-09-07T07:53:58.9048871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9049245Z return func(*args, **kwargs) 2025-09-07T07:53:58.9049614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-09-07T07:53:58.9049988Z self_outputs = self.self( 2025-09-07T07:53:58.9050352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9050727Z return func(*args, **kwargs) 2025-09-07T07:53:58.9051094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-09-07T07:53:58.9051527Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:58.9051726Z 2025-09-07T07:53:58.9051809Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9052027Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9052248Z cudagraph partition due to non gpu ops 
2025-09-07T07:53:58.9052452Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9052653Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9052856Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9053060Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9053266Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9053464Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9053674Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9053881Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9054085Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9054316Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:53:58.9054683Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:58.9055015Z return mod(**inputs) 2025-09-07T07:53:58.9055383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-09-07T07:53:58.9055753Z outputs = self.bert( 2025-09-07T07:53:58.9056115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-09-07T07:53:58.9056497Z encoder_outputs = self.encoder( 2025-09-07T07:53:58.9056875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-09-07T07:53:58.9057267Z layer_outputs = layer_module( 2025-09-07T07:53:58.9057626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:58.9057989Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:58.9058373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-09-07T07:53:58.9058759Z self_attention_outputs = self.attention( 2025-09-07T07:53:58.9059137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9059508Z return func(*args, **kwargs) 2025-09-07T07:53:58.9059874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-09-07T07:53:58.9060249Z self_outputs = self.self( 2025-09-07T07:53:58.9060608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9061018Z return func(*args, **kwargs) 2025-09-07T07:53:58.9061384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-09-07T07:53:58.9061822Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:58.9062006Z 2025-09-07T07:53:58.9062094Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9062343Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9062548Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9062754Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9062960Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9063163Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9063362Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9063568Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9063776Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9063976Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9064220Z cudagraph partition due to non 
gpu ops 2025-09-07T07:53:58.9064427Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9064666Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:53:58.9065030Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:58.9065351Z return mod(**inputs) 2025-09-07T07:53:58.9065716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-09-07T07:53:58.9066088Z outputs = self.bert( 2025-09-07T07:53:58.9066448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-09-07T07:53:58.9066823Z encoder_outputs = self.encoder( 2025-09-07T07:53:58.9067199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-09-07T07:53:58.9067577Z layer_outputs = layer_module( 2025-09-07T07:53:58.9067927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:58.9068287Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:58.9068659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-09-07T07:53:58.9069053Z self_attention_outputs = self.attention( 2025-09-07T07:53:58.9069418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9069777Z return func(*args, **kwargs) 2025-09-07T07:53:58.9070133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-09-07T07:53:58.9070493Z self_outputs = self.self( 2025-09-07T07:53:58.9070841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9071196Z return func(*args, **kwargs) 2025-09-07T07:53:58.9071552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-09-07T07:53:58.9071966Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:58.9072153Z 2025-09-07T07:53:58.9072231Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9072440Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9072646Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9072848Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9073040Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9073244Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9073448Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9073653Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9073900Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9074108Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9074313Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9074515Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9074745Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:58.9075107Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:58.9075430Z return mod(**inputs) 2025-09-07T07:53:58.9075836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-09-07T07:53:58.9076192Z outputs = self.bert( 2025-09-07T07:53:58.9076537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-09-07T07:53:58.9076906Z encoder_outputs = self.encoder( 2025-09-07T07:53:58.9077280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-09-07T07:53:58.9077660Z layer_outputs = layer_module( 2025-09-07T07:53:58.9077987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:58.9078334Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:58.9078705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-09-07T07:53:58.9079088Z self_attention_outputs = self.attention( 2025-09-07T07:53:58.9079500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9079869Z return func(*args, **kwargs) 2025-09-07T07:53:58.9080240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-09-07T07:53:58.9080671Z self_outputs = self.self( 2025-09-07T07:53:58.9081048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9081484Z return func(*args, **kwargs) 2025-09-07T07:53:58.9081865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-09-07T07:53:58.9082341Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:58.9082541Z 2025-09-07T07:53:58.9082637Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9082873Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9083094Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9083323Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9083549Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9083776Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9083996Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9084223Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9084452Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9084676Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9084893Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9085120Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9085400Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:58.9085794Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:58.9086141Z return mod(**inputs) 2025-09-07T07:53:58.9086550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-09-07T07:53:58.9087174Z outputs = self.bert( 2025-09-07T07:53:58.9087582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-09-07T07:53:58.9088016Z encoder_outputs = self.encoder( 2025-09-07T07:53:58.9088418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-09-07T07:53:58.9088883Z layer_outputs = layer_module( 2025-09-07T07:53:58.9089258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:58.9089647Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:58.9090050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-09-07T07:53:58.9090530Z self_attention_outputs = self.attention( 2025-09-07T07:53:58.9090947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9091350Z return func(*args, **kwargs) 2025-09-07T07:53:58.9091743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-09-07T07:53:58.9092151Z self_outputs = self.self( 2025-09-07T07:53:58.9092544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9092953Z return func(*args, **kwargs) 2025-09-07T07:53:58.9093345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-09-07T07:53:58.9093811Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:58.9094009Z 2025-09-07T07:53:58.9094094Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9094329Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9094555Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9094776Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9095000Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9095199Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9095399Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9095598Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9095793Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9095993Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9096195Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9096394Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9096615Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:58.9096968Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:58.9097284Z return mod(**inputs) 2025-09-07T07:53:58.9097634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-09-07T07:53:58.9097992Z outputs = self.bert( 2025-09-07T07:53:58.9098338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-09-07T07:53:58.9098710Z encoder_outputs = self.encoder( 2025-09-07T07:53:58.9099073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-09-07T07:53:58.9099440Z layer_outputs = layer_module( 2025-09-07T07:53:58.9099768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:58.9100122Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:58.9100493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-09-07T07:53:58.9100874Z self_attention_outputs = self.attention( 2025-09-07T07:53:58.9101240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9101592Z return func(*args, **kwargs) 2025-09-07T07:53:58.9101953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-09-07T07:53:58.9102326Z self_outputs = self.self( 2025-09-07T07:53:58.9102719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9103079Z return func(*args, **kwargs) 2025-09-07T07:53:58.9103439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-09-07T07:53:58.9103869Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:58.9104043Z 2025-09-07T07:53:58.9104129Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9104369Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9104570Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9104775Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9104979Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9105184Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9105382Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9105588Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9105814Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9106015Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9106209Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9106411Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9106640Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:58.9106995Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:58.9107312Z return mod(**inputs) 2025-09-07T07:53:58.9107701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-09-07T07:53:58.9108073Z outputs = self.bert( 2025-09-07T07:53:58.9108422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-09-07T07:53:58.9108803Z encoder_outputs = self.encoder( 2025-09-07T07:53:58.9109175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-09-07T07:53:58.9109572Z layer_outputs = layer_module( 2025-09-07T07:53:58.9109920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:58.9110283Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:58.9110694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-09-07T07:53:58.9111077Z self_attention_outputs = self.attention( 2025-09-07T07:53:58.9111453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9111826Z return func(*args, **kwargs) 2025-09-07T07:53:58.9112192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-09-07T07:53:58.9112602Z self_outputs = self.self( 2025-09-07T07:53:58.9112967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9113339Z return func(*args, **kwargs) 2025-09-07T07:53:58.9113744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-09-07T07:53:58.9114177Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:58.9114358Z 2025-09-07T07:53:58.9114439Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9114654Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9114864Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9115070Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9115267Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9115470Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9115676Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9115882Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9116120Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9116324Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9116531Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9116735Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9116962Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:58.9117322Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:58.9117644Z return mod(**inputs) 2025-09-07T07:53:58.9118041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-09-07T07:53:58.9118413Z outputs = self.bert( 2025-09-07T07:53:58.9118769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-09-07T07:53:58.9119149Z encoder_outputs = self.encoder( 2025-09-07T07:53:58.9119524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-09-07T07:53:58.9119904Z layer_outputs = layer_module( 2025-09-07T07:53:58.9120242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:58.9120628Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:58.9121006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-09-07T07:53:58.9121441Z self_attention_outputs = self.attention( 2025-09-07T07:53:58.9121818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9122183Z return func(*args, **kwargs) 2025-09-07T07:53:58.9122548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-09-07T07:53:58.9122922Z self_outputs = self.self( 2025-09-07T07:53:58.9123282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9123642Z return func(*args, **kwargs) 2025-09-07T07:53:58.9124005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-09-07T07:53:58.9124437Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:58.9124617Z 2025-09-07T07:53:58.9124704Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9124919Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9125121Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9125329Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9125534Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9125738Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9125935Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9126142Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9126357Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9126567Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9126773Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9127075Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9127323Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:58.9127709Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:58.9128059Z return mod(**inputs) 2025-09-07T07:53:58.9128447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-09-07T07:53:58.9128834Z outputs = self.bert( 2025-09-07T07:53:58.9129202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-09-07T07:53:58.9129602Z encoder_outputs = self.encoder( 2025-09-07T07:53:58.9130032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-09-07T07:53:58.9130419Z layer_outputs = layer_module( 2025-09-07T07:53:58.9130773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:58.9131144Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:58.9131527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-09-07T07:53:58.9131963Z self_attention_outputs = self.attention( 2025-09-07T07:53:58.9132355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9132736Z return func(*args, **kwargs) 2025-09-07T07:53:58.9133111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-09-07T07:53:58.9133495Z self_outputs = self.self( 2025-09-07T07:53:58.9133865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9134246Z return func(*args, **kwargs) 2025-09-07T07:53:58.9134623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-09-07T07:53:58.9135069Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:58.9135255Z 2025-09-07T07:53:58.9135341Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9135564Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9135782Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9135992Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9136198Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9136398Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9136597Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9136797Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9136987Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9137188Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9137388Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9137587Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9137809Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:58.9138158Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:58.9138475Z return mod(**inputs) 2025-09-07T07:53:58.9138829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1308, in forward 2025-09-07T07:53:58.9139187Z outputs = self.bert( 2025-09-07T07:53:58.9139535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-09-07T07:53:58.9139906Z encoder_outputs = self.encoder( 2025-09-07T07:53:58.9140273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-09-07T07:53:58.9140645Z layer_outputs = layer_module( 2025-09-07T07:53:58.9140976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:53:58.9141334Z return super().__call__(*args, **kwargs) 2025-09-07T07:53:58.9141706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-09-07T07:53:58.9142088Z self_attention_outputs = self.attention( 2025-09-07T07:53:58.9142498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9142864Z return func(*args, **kwargs) 2025-09-07T07:53:58.9143238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-09-07T07:53:58.9143664Z self_outputs = self.self( 2025-09-07T07:53:58.9144013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:53:58.9144419Z return func(*args, **kwargs) 2025-09-07T07:53:58.9144784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-09-07T07:53:58.9145338Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:53:58.9145515Z 2025-09-07T07:53:58.9145698Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9145911Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9146109Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9146316Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9146521Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9146727Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9146920Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9147124Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9147329Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9147534Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9147727Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9147930Z cudagraph partition due to non gpu ops 2025-09-07T07:53:58.9148161Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:53:58.9168700Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:53:58.9169019Z return mod(**inputs) 2025-09-07T07:53:58.9169378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1328, in forward 2025-09-07T07:53:58.9169864Z masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) 2025-09-07T07:53:58.9170098Z 2025-09-07T07:54:02.7392050Z Compilation time (from dynamo_timed): 32.487030904 2025-09-07T07:54:02.7426660Z pass 2025-09-07T07:54:02.7427188Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:54:02.7428289Z TIMING: _recursive_pre_grad_passes:0.03358 _recursive_joint_graph_passes:0.39138 _recursive_post_grad_passes:0.072 linear_unary_template_precompiling:5.57898 linear_unary_template_autotuning:1.4652 async_compile.wait:0.76525 code_gen:3.30929 inductor_compile:24.85708 backend_compile:29.81305 gc:0.00072 entire_frame_compile:32.48703 total_wall_time:32.48703 2025-09-07T07:54:02.7429838Z STATS: call_* op count: 291 | FakeTensorMode.__torch_dispatch__:26884 | FakeTensor.__torch_dispatch__:2935 | ProxyTorchDispatchMode.__torch_dispatch__:7211 2025-09-07T07:54:02.7431643Z Dynamo produced 1 graphs covering 291 ops with 0 graph breaks (0 unique) 2025-09-07T07:54:05.5308422Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:54:05.5309236Z import pynvml # type: ignore[import] 2025-09-07T07:54:08.1804251Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-09-07T07:54:08.1805288Z from pkg_resources import resource_filename 2025-09-07T07:54:08.8682105Z 2025-09-07T07:54:09.8142272Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:54:09.8142636Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:54:09.8142964Z cpu eval BertForQuestionAnswering 2025-09-07T07:54:10.1933489Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:54:10.3338833Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:54:10.4477133Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:54:34.4394599Z Autotune Choices Stats: 2025-09-07T07:54:34.4395082Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_72", "best_time": 0.006146000032458687} 2025-09-07T07:54:34.4402793Z AUTOTUNE linear_unary(512x768, 2x768, 2) 2025-09-07T07:54:34.4403129Z strides: [768, 1], [1, 0], [1] 2025-09-07T07:54:34.4403404Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:54:34.4403705Z cpp_CppMicroGemmAMX_72 0.0061 ms 100.0% 2025-09-07T07:54:34.4403954Z _linear_pointwise 0.0368 ms 16.7% 2025-09-07T07:54:34.4404399Z SingleProcess AUTOTUNE benchmarking takes 0.2596 seconds and 1.3515 seconds precompiling for 2 choices 2025-09-07T07:54:34.8728746Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8733579Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8733886Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8734128Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8734360Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8734626Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8734851Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8735079Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8735307Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8735530Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8735755Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8735989Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8736260Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:54:34.8736712Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:54:34.8737115Z return mod(**inputs) 2025-09-07T07:54:34.8737564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1781, in forward 2025-09-07T07:54:34.8738012Z logits = self.qa_outputs(sequence_output) 2025-09-07T07:54:34.8738192Z 2025-09-07T07:54:34.8738274Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8738870Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8739080Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8739298Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8739511Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8739722Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8739940Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8740178Z cudagraph partition due to non gpu ops. 
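The AUTOTUNE linear_unary entries above come from Inductor benchmarking its CPU GEMM template (cpp_CppMicroGemmAMX) against the ATen fallback (_linear_pointwise) and keeping the faster kernel. A rough sketch of the compile settings this job name implies (max-autotune, freezing, bf16 autocast on CPU, dynamic shapes) follows; the toy module and shapes are assumptions, and the kernel picked on another machine may differ from this log.

import torch
from torch._inductor import config as inductor_config

inductor_config.freezing = True  # the *_freezing configs fold weights into constants

model = torch.nn.Linear(768, 2).eval()
x = torch.randn(512, 768)

compiled = torch.compile(model, mode="max-autotune", dynamic=True)

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = compiled(x)  # autotuning such as the AUTOTUNE linear_unary(...) lines runs during this first call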
Found from : 2025-09-07T07:54:34.8740638Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:54:34.8740980Z return mod(**inputs) 2025-09-07T07:54:34.8741357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-09-07T07:54:34.8741742Z outputs = self.bert( 2025-09-07T07:54:34.8742116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-09-07T07:54:34.8742550Z encoder_outputs = self.encoder( 2025-09-07T07:54:34.8742968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-09-07T07:54:34.8743365Z layer_outputs = layer_module( 2025-09-07T07:54:34.8743737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:54:34.8744136Z return super().__call__(*args, **kwargs) 2025-09-07T07:54:34.8744568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-09-07T07:54:34.8745000Z self_attention_outputs = self.attention( 2025-09-07T07:54:34.8745703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:54:34.8746097Z return func(*args, **kwargs) 2025-09-07T07:54:34.8746474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-09-07T07:54:34.8746878Z self_outputs = self.self( 2025-09-07T07:54:34.8747254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:54:34.8747627Z return func(*args, **kwargs) 2025-09-07T07:54:34.8748003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-09-07T07:54:34.8748462Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:54:34.8748657Z 2025-09-07T07:54:34.8748751Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8748960Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8749176Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8749392Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8749630Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8749836Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8750048Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8750255Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8750465Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8750668Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8750931Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8751142Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8751383Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:54:34.8859534Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:54:34.8859855Z return mod(**inputs) 2025-09-07T07:54:34.8860195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1767, in forward 2025-09-07T07:54:34.8860565Z outputs = self.bert( 2025-09-07T07:54:34.8860919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1028, in forward 2025-09-07T07:54:34.8861292Z encoder_outputs = self.encoder( 2025-09-07T07:54:34.8861665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 675, in forward 2025-09-07T07:54:34.8862038Z layer_outputs = layer_module( 2025-09-07T07:54:34.8862385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:54:34.8862734Z return super().__call__(*args, **kwargs) 2025-09-07T07:54:34.8863099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 584, in forward 2025-09-07T07:54:34.8863476Z self_attention_outputs = self.attention( 2025-09-07T07:54:34.8863854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:54:34.8864209Z return func(*args, **kwargs) 2025-09-07T07:54:34.8864559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 514, in forward 2025-09-07T07:54:34.8864919Z self_outputs = self.self( 2025-09-07T07:54:34.8865273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:54:34.8865623Z return func(*args, **kwargs) 2025-09-07T07:54:34.8865974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 438, in forward 2025-09-07T07:54:34.8866378Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:54:34.8866556Z 2025-09-07T07:54:34.8866631Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8866834Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8867034Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8867234Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8867489Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8867691Z cudagraph partition due to non gpu ops 2025-09-07T07:54:34.8867922Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:54:34.8868288Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:54:34.8868597Z return mod(**inputs) 2025-09-07T07:54:34.8868951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1799, in forward 2025-09-07T07:54:34.8869388Z start_loss = loss_fct(start_logits, start_positions) 2025-09-07T07:54:34.8869541Z 2025-09-07T07:54:34.8869651Z cudagraph partition due to non gpu ops. 
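The frame at modeling_bert.py line 1799 (start_loss), together with the matching end_loss frame that follows, corresponds to the standard span-extraction loss in BertForQuestionAnswering. A minimal sketch of that pattern, with made-up logits and target positions rather than anything taken from this run:

import torch
import torch.nn as nn

batch, seq_len = 1, 512  # illustrative sizes
start_logits = torch.randn(batch, seq_len)
end_logits = torch.randn(batch, seq_len)
start_positions = torch.tensor([17])
end_positions = torch.tensor([42])

loss_fct = nn.CrossEntropyLoss()
start_loss = loss_fct(start_logits, start_positions)
end_loss = loss_fct(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2  # averaged, as in the HuggingFace implementation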
Found from : 2025-09-07T07:54:34.8869991Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:54:34.8870307Z return mod(**inputs) 2025-09-07T07:54:34.8870666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/bert/modeling_bert.py", line 1800, in forward 2025-09-07T07:54:34.8871072Z end_loss = loss_fct(end_logits, end_positions) 2025-09-07T07:54:34.8871221Z 2025-09-07T07:54:38.4635306Z Compilation time (from dynamo_timed): 26.864369497 2025-09-07T07:54:38.4635589Z pass 2025-09-07T07:54:38.4638768Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:54:38.4639756Z TIMING: _recursive_pre_grad_passes:0.03278 _recursive_joint_graph_passes:0.38622 _recursive_post_grad_passes:0.07283 linear_unary_template_precompiling:1.36421 linear_unary_template_autotuning:0.25803 async_compile.wait:0.73836 code_gen:3.14622 inductor_compile:19.21084 backend_compile:24.20497 gc:0.0014 entire_frame_compile:26.86437 total_wall_time:26.86437 2025-09-07T07:54:38.4641026Z STATS: call_* op count: 298 | FakeTensorMode.__torch_dispatch__:26743 | FakeTensor.__torch_dispatch__:2965 | ProxyTorchDispatchMode.__torch_dispatch__:7220 2025-09-07T07:54:38.4641534Z Dynamo produced 1 graphs covering 298 ops with 0 graph breaks (0 unique) 2025-09-07T07:54:41.1848701Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:54:41.1849517Z import pynvml # type: ignore[import] 2025-09-07T07:54:43.7686180Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-09-07T07:54:43.7687346Z from pkg_resources import resource_filename 2025-09-07T07:54:44.4341374Z 2025-09-07T07:55:03.2805185Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:55:03.2805507Z loading model: 0it [00:18, ?it/s] 2025-09-07T07:55:03.2805796Z cpu eval BlenderbotForCausalLM 2025-09-07T07:55:03.4758125Z Compilation time (from dynamo_timed): 0 2025-09-07T07:55:03.4758604Z pass_due_to_skip 2025-09-07T07:55:03.4758943Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:55:03.4759282Z TIMING: total_wall_time:0 2025-09-07T07:55:03.4759616Z STATS: call_* op count: 0 2025-09-07T07:55:03.4759979Z Dynamo produced 0 graphs covering 0 ops with 0 graph breaks (0 unique) 2025-09-07T07:55:05.4170017Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:55:05.4170886Z import pynvml # type: ignore[import] 2025-09-07T07:55:08.0541747Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. 
Refrain from using this package or pin to Setuptools<81. 2025-09-07T07:55:08.0543096Z from pkg_resources import resource_filename 2025-09-07T07:55:08.6911773Z 2025-09-07T07:55:09.4090640Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:55:09.4092763Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:55:09.4093018Z cpu eval BlenderbotSmallForCausalLM 2025-09-07T07:55:09.5643538Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:55:09.6228596Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:55:09.6798984Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:55:24.9655086Z Autotune Choices Stats: 2025-09-07T07:55:24.9655692Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.01210599998557882} 2025-09-07T07:55:24.9661133Z AUTOTUNE linear_unary(128x512, 512x512, 512) 2025-09-07T07:55:24.9661456Z strides: [512, 1], [1, 0], [1] 2025-09-07T07:55:24.9661726Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:55:24.9662026Z cpp_CppMicroGemmAMX_0 0.0121 ms 100.0% 2025-09-07T07:55:24.9662272Z _linear_pointwise 0.0640 ms 18.9% 2025-09-07T07:55:24.9662701Z SingleProcess AUTOTUNE benchmarking takes 0.2539 seconds and 1.3088 seconds precompiling for 2 choices 2025-09-07T07:55:27.0262867Z Autotune Choices Stats: 2025-09-07T07:55:27.0263455Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_4", "best_time": 0.044811999941885006} 2025-09-07T07:55:27.0267891Z AUTOTUNE linear_unary(128x512, 2048x512, 2048) 2025-09-07T07:55:27.0268209Z strides: [512, 1], [1, 0], [1] 2025-09-07T07:55:27.0268548Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:55:27.0268954Z cpp_CppMicroGemmAMX_4 0.0448 ms 100.0% 2025-09-07T07:55:27.0269272Z _linear_pointwise 0.0796 ms 56.3% 2025-09-07T07:55:27.0269815Z SingleProcess AUTOTUNE benchmarking takes 0.2627 seconds and 1.4228 seconds precompiling for 2 choices 2025-09-07T07:55:28.7065170Z Autotune Choices Stats: 2025-09-07T07:55:28.7069251Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_5", "best_time": 0.018420999822410522} 2025-09-07T07:55:28.7075089Z AUTOTUNE linear_unary(128x2048, 512x2048, 512) 2025-09-07T07:55:28.7075451Z strides: [2048, 1], [1, 0], [1] 2025-09-07T07:55:28.7078261Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:55:28.7078582Z cpp_CppMicroGemmAMX_5 0.0184 ms 100.0% 2025-09-07T07:55:28.7078843Z _linear_pointwise 0.0817 ms 22.5% 2025-09-07T07:55:28.7079282Z SingleProcess AUTOTUNE benchmarking takes 0.2602 seconds and 1.3434 seconds precompiling for 2 choices 2025-09-07T07:55:33.7776285Z Autotune Choices Stats: 2025-09-07T07:55:33.7776831Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_48", "best_time": 0.4910499999368767} 2025-09-07T07:55:33.7782188Z AUTOTUNE linear_unary(128x512, 50265x512) 2025-09-07T07:55:33.7782467Z strides: [512, 1], [1, 0] 2025-09-07T07:55:33.7782690Z dtypes: torch.bfloat16, torch.bfloat16 2025-09-07T07:55:33.7782936Z cpp_CppMicroGemmAMX_48 0.4910 ms 100.0% 2025-09-07T07:55:33.7783167Z _linear_pointwise 0.8031 ms 61.1% 2025-09-07T07:55:33.7783561Z SingleProcess AUTOTUNE benchmarking takes 0.4152 seconds and 1.3061 seconds precompiling for 2 choices 2025-09-07T07:55:34.5527110Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5527448Z cudagraph 
partition due to non gpu ops 2025-09-07T07:55:34.5527729Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5527966Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5528199Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5528435Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5529070Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5529291Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5529516Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5529717Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5529973Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5530193Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5530424Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5530647Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5530985Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5531253Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:55:34.5531678Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:55:34.5532047Z return mod(**inputs) 2025-09-07T07:55:34.5532549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-09-07T07:55:34.5533044Z outputs = self.model.decoder( 2025-09-07T07:55:34.5533502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:55:34.5533956Z layer_outputs = decoder_layer( 2025-09-07T07:55:34.5534333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:55:34.5534737Z return super().__call__(*args, **kwargs) 2025-09-07T07:55:34.5535223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:55:34.5535733Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:55:34.5536253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:55:34.5536766Z attn_output, attn_weights = attention_interface( 2025-09-07T07:55:34.5537221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:55:34.5537713Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:55:34.5537908Z 2025-09-07T07:55:34.5538026Z cudagraph partition due to non gpu ops. 
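Each benchmark also reports a summary like "Dynamo produced 1 graphs covering 298 ops with 0 graph breaks" above. A minimal sketch of how one might check graph-break counts for a function, assuming the torch._dynamo.explain helper; the toy function here is hypothetical and unrelated to the benchmark:

import torch
import torch._dynamo

def fn(x):
    return torch.nn.functional.relu(x) + 1

explanation = torch._dynamo.explain(fn)(torch.randn(8))
print(explanation.graph_count, explanation.graph_break_count)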
Found from : 2025-09-07T07:55:34.5538396Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:55:34.5538728Z return mod(**inputs) 2025-09-07T07:55:34.5539161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-09-07T07:55:34.5539608Z outputs = self.model.decoder( 2025-09-07T07:55:34.5540052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:55:34.5540531Z layer_outputs = decoder_layer( 2025-09-07T07:55:34.5540951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:55:34.5541343Z return super().__call__(*args, **kwargs) 2025-09-07T07:55:34.5541810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:55:34.5542315Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:55:34.5542822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:55:34.5543332Z attn_output, attn_weights = attention_interface( 2025-09-07T07:55:34.5543774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:55:34.5544274Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:55:34.5544491Z 2025-09-07T07:55:34.5544584Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5544817Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5545249Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5545486Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5545702Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5545933Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5546165Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5546464Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5546665Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5546875Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5547115Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:55:34.5547483Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:55:34.5547825Z return mod(**inputs) 2025-09-07T07:55:34.5548284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-09-07T07:55:34.5548756Z outputs = self.model.decoder( 2025-09-07T07:55:34.5549226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:55:34.5549703Z layer_outputs = decoder_layer( 2025-09-07T07:55:34.5550082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:55:34.5550480Z return super().__call__(*args, **kwargs) 2025-09-07T07:55:34.5550956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:55:34.5551449Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:55:34.5551954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:55:34.5552443Z attn_output, attn_weights = attention_interface( 2025-09-07T07:55:34.5552915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:55:34.5553509Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:55:34.5553709Z 2025-09-07T07:55:34.5553831Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:55:34.5554220Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:55:34.5554595Z return mod(**inputs) 2025-09-07T07:55:34.5555043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1512, in forward 2025-09-07T07:55:34.5555517Z outputs = self.model.decoder( 2025-09-07T07:55:34.5555980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:55:34.5556441Z layer_outputs = decoder_layer( 2025-09-07T07:55:34.5556821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:55:34.5557405Z return super().__call__(*args, **kwargs) 2025-09-07T07:55:34.5557893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:55:34.5558391Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:55:34.5558884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:55:34.5559391Z attn_output, attn_weights = attention_interface( 2025-09-07T07:55:34.5559859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:55:34.5560383Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:55:34.5560547Z 2025-09-07T07:55:34.5560635Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5560852Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5561079Z 
cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5652335Z cudagraph partition due to non gpu ops 2025-09-07T07:55:34.5652625Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:55:34.5652999Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:55:34.5653321Z return mod(**inputs) 2025-09-07T07:55:34.5653743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1534, in forward 2025-09-07T07:55:34.5654277Z loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) 2025-09-07T07:55:34.5654493Z 2025-09-07T07:55:40.6267506Z Compilation time (from dynamo_timed): 29.846274101 2025-09-07T07:55:40.6282932Z pass 2025-09-07T07:55:40.6283803Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:55:40.6285008Z TIMING: _recursive_pre_grad_passes:0.0259 _recursive_joint_graph_passes:0.55196 _recursive_post_grad_passes:0.05434 linear_unary_template_precompiling:5.38835 linear_unary_template_autotuning:1.18596 async_compile.wait:0.81039 code_gen:6.27495 inductor_compile:24.90335 backend_compile:28.29895 gc:0.0005 entire_frame_compile:29.84627 total_wall_time:29.84627 2025-09-07T07:55:40.6286218Z STATS: call_* op count: 254 | FakeTensorMode.__torch_dispatch__:18842 | FakeTensor.__torch_dispatch__:2109 | ProxyTorchDispatchMode.__torch_dispatch__:5091 2025-09-07T07:55:40.6286776Z Dynamo produced 1 graphs covering 254 ops with 0 graph breaks (0 unique) 2025-09-07T07:55:43.2135258Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:55:43.2136355Z import pynvml # type: ignore[import] 2025-09-07T07:55:45.8204079Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-09-07T07:55:45.8205089Z from pkg_resources import resource_filename 2025-09-07T07:55:46.4521278Z 2025-09-07T07:55:47.4023772Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:55:47.4024115Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:55:47.4024382Z cpu eval BlenderbotSmallForConditionalGeneration 2025-09-07T07:55:47.6693244Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:55:47.7981849Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:55:47.9082342Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:56:20.8624671Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8627745Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8628043Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8628259Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8628479Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8628712Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8631177Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8631866Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8632130Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8632350Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8632572Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8632779Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8632997Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8633208Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8633434Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8633684Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8634079Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8634465Z return mod(**inputs) 2025-09-07T07:56:20.8634933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8635392Z outputs = self.model( 2025-09-07T07:56:20.8635821Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1195, in forward 2025-09-07T07:56:20.8636270Z encoder_outputs = self.encoder( 2025-09-07T07:56:20.8636715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 812, in forward 2025-09-07T07:56:20.8637162Z layer_outputs = encoder_layer( 2025-09-07T07:56:20.8637535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8637992Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8638466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 296, in forward 2025-09-07T07:56:20.8638961Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:56:20.8639467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8639936Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8640379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8640881Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8641090Z 2025-09-07T07:56:20.8641217Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8641608Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8641965Z return mod(**inputs) 2025-09-07T07:56:20.8642437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8642917Z outputs = self.model( 2025-09-07T07:56:20.8643371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1195, in forward 2025-09-07T07:56:20.8643855Z encoder_outputs = self.encoder( 2025-09-07T07:56:20.8644320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 812, in forward 2025-09-07T07:56:20.8644800Z layer_outputs = encoder_layer( 2025-09-07T07:56:20.8645485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8645898Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8646395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 296, in forward 2025-09-07T07:56:20.8647042Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:56:20.8647609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8648153Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8648633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8649137Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8649321Z 2025-09-07T07:56:20.8649410Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8649642Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8649865Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8650086Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8650338Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8650560Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8650781Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8651001Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8651220Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8651443Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8651698Z cudagraph partition due to non gpu ops. 
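For reference, a minimal sketch (shapes and names are illustrative assumptions, not taken from this benchmark run) of the SDPA-then-transpose call pattern that the "cudagraph partition due to non gpu ops" tracebacks above repeatedly point at in transformers' sdpa_attention.py:

# Minimal, self-contained sketch of the flagged op sequence: an SDPA call
# followed by transpose(1, 2).contiguous() on its output. On this CPU run
# these are non-GPU ops, which is what the partition messages report.
import torch
import torch.nn.functional as F

batch, heads, seq, head_dim = 2, 4, 16, 32  # assumed toy sizes
q = torch.randn(batch, heads, seq, head_dim)
k = torch.randn(batch, heads, seq, head_dim)
v = torch.randn(batch, heads, seq, head_dim)

# corresponds to sdpa_attention.py line 81 in the tracebacks
attn_output = F.scaled_dot_product_attention(q, k, v)
# corresponds to sdpa_attention.py line 91 in the tracebacks
attn_output = attn_output.transpose(1, 2).contiguous()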
Found from : 2025-09-07T07:56:20.8776324Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8776658Z return mod(**inputs) 2025-09-07T07:56:20.8777080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8777531Z outputs = self.model( 2025-09-07T07:56:20.8777941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1195, in forward 2025-09-07T07:56:20.8778378Z encoder_outputs = self.encoder( 2025-09-07T07:56:20.8778827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 812, in forward 2025-09-07T07:56:20.8779297Z layer_outputs = encoder_layer( 2025-09-07T07:56:20.8779648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8780017Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8780463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 296, in forward 2025-09-07T07:56:20.8780929Z hidden_states, attn_weights = self.self_attn( 2025-09-07T07:56:20.8781422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8781901Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8782339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8782785Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8782948Z 2025-09-07T07:56:20.8783035Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8783247Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8783449Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8783658Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8783867Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8784070Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8784270Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8784480Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8784685Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8784888Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8785116Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8785476Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8785806Z return mod(**inputs) 2025-09-07T07:56:20.8786228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8786681Z outputs = self.model( 2025-09-07T07:56:20.8787085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8787518Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8787953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8788385Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8788728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8789090Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8789525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8790000Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8790470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8790931Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8791377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8791863Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8792044Z 2025-09-07T07:56:20.8792156Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8792515Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8792834Z return mod(**inputs) 2025-09-07T07:56:20.8793249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8793716Z outputs = self.model( 2025-09-07T07:56:20.8794128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8794567Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8795029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8795471Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8795822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8796183Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8796610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8797072Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8797528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8797980Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8798419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8798864Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8799031Z 2025-09-07T07:56:20.8799112Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8799327Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8799542Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8799756Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8799959Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8800171Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8800382Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8800590Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8800825Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8801204Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8801557Z return mod(**inputs) 2025-09-07T07:56:20.8802013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8802476Z outputs = self.model( 2025-09-07T07:56:20.8802924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8803391Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8803855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8804325Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8804694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8805079Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8805527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.8806003Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.8806513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8807081Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8807565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8808123Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8808320Z 2025-09-07T07:56:20.8808442Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8808840Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8809166Z return mod(**inputs) 2025-09-07T07:56:20.8809629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8810105Z outputs = self.model( 2025-09-07T07:56:20.8810531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8810982Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8811430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8811885Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8812244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8812615Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8813069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.8813551Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.8814032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8814516Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8814966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8815419Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8815591Z 2025-09-07T07:56:20.8815674Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8815893Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8816109Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8816314Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8816527Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8816737Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8816952Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8817156Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8817366Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8817575Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8817813Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8818172Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8818512Z return mod(**inputs) 2025-09-07T07:56:20.8818944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8819387Z outputs = self.model( 2025-09-07T07:56:20.8819806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8820239Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8820690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8821107Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8821446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8821795Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8822210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8822696Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8823159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8823620Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8824087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8824538Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8824722Z 2025-09-07T07:56:20.8824823Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8825172Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8825488Z return mod(**inputs) 2025-09-07T07:56:20.8825886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8826304Z outputs = self.model( 2025-09-07T07:56:20.8826705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8827131Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8827551Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8827966Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8828305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8828656Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8829086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8829536Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8829972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8830414Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8830838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8831278Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8831431Z 2025-09-07T07:56:20.8831517Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8831716Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8831922Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8832123Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8832325Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8832523Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8832727Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8832928Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8833159Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8833501Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8833817Z return mod(**inputs) 2025-09-07T07:56:20.8834222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8834642Z outputs = self.model( 2025-09-07T07:56:20.8835048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8835466Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8835883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8836343Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8836683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8837034Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8837456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.8837942Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.8838401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8838859Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8839291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8839770Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8839958Z 2025-09-07T07:56:20.8840063Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8840435Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8840767Z return mod(**inputs) 2025-09-07T07:56:20.8841188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8841635Z outputs = self.model( 2025-09-07T07:56:20.8842102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8842588Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8843071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8843552Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8843940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8844339Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8844827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.8845527Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.8846054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8846732Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8847289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8847824Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8848001Z 2025-09-07T07:56:20.8848092Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8848307Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8848527Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8848746Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8848956Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8849163Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8849386Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8849590Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8849790Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8850023Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8850262Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8850621Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8851038Z return mod(**inputs) 2025-09-07T07:56:20.8851450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8851892Z outputs = self.model( 2025-09-07T07:56:20.8852294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8852717Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8853196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8853610Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8853951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8854307Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8854749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8855205Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8855652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8856117Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8856625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8857106Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8857286Z 2025-09-07T07:56:20.8857395Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8857743Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8858076Z return mod(**inputs) 2025-09-07T07:56:20.8858491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8858928Z outputs = self.model( 2025-09-07T07:56:20.8859345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8859811Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8860293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8860736Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8861091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8861455Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8861907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8862369Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8862819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8863271Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8863702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8864141Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8864306Z 2025-09-07T07:56:20.8864387Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8864602Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8864811Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8865008Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8865260Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8865467Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8865674Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8865873Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8866104Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8866457Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8866780Z return mod(**inputs) 2025-09-07T07:56:20.8867222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8867666Z outputs = self.model( 2025-09-07T07:56:20.8868079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8868513Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8868946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8869371Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8869728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8870101Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8870554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.8871025Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.8871501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8871962Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8872407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8872888Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8873074Z 2025-09-07T07:56:20.8873181Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8873550Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8873880Z return mod(**inputs) 2025-09-07T07:56:20.8874304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8874741Z outputs = self.model( 2025-09-07T07:56:20.8875153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8875596Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8876036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8876481Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8876839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8877201Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8877652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.8878128Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.8878603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8879066Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8879505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8880002Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8880175Z 2025-09-07T07:56:20.8880261Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8880487Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8880702Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8880924Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8881140Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8881398Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8881615Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8881849Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8882066Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8882276Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8882508Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8882876Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8883223Z return mod(**inputs) 2025-09-07T07:56:20.8883679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8884146Z outputs = self.model( 2025-09-07T07:56:20.8884588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8885064Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8885530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8885999Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8886380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8886771Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8887346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8887871Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8888376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8888853Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8889304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8889777Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8889984Z 2025-09-07T07:56:20.8890086Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8890447Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8890777Z return mod(**inputs) 2025-09-07T07:56:20.8891191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8891621Z outputs = self.model( 2025-09-07T07:56:20.8892030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8892463Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8892888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8893334Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8893673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8894025Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8894505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8894957Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8895413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8895867Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8896347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8896801Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8896968Z 2025-09-07T07:56:20.8897051Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8897273Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8897496Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8897706Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8897904Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8898111Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8898315Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8898522Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8898760Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8899116Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8899444Z return mod(**inputs) 2025-09-07T07:56:20.8899850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8900277Z outputs = self.model( 2025-09-07T07:56:20.8900681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8901121Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8901549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8901987Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8902324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8902675Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8903106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.8903562Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.8904018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8904459Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8904888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8905343Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8905517Z 2025-09-07T07:56:20.8905626Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8905984Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8906312Z return mod(**inputs) 2025-09-07T07:56:20.8906726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8907160Z outputs = self.model( 2025-09-07T07:56:20.8907558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8907978Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8908440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8908863Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8909200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8909551Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8910003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.8910454Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.8910911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8911355Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8911783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8912220Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8912375Z 2025-09-07T07:56:20.8912455Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8912668Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8912875Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8913075Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8913271Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8913472Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8913674Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8913876Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8914068Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8914267Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8914496Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8914849Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8915158Z return mod(**inputs) 2025-09-07T07:56:20.8915563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8915976Z outputs = self.model( 2025-09-07T07:56:20.8916379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8916806Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8917227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8917659Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8918014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8918365Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8918789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8919232Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8919679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8920134Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8920569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8921043Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8921228Z 2025-09-07T07:56:20.8921333Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8921728Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8922056Z return mod(**inputs) 2025-09-07T07:56:20.8922471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8922892Z outputs = self.model( 2025-09-07T07:56:20.8923320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8923800Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8924243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8924690Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8925044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8925416Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8925867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8926334Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8926895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8927410Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8927894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8928393Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8928570Z 2025-09-07T07:56:20.8928668Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8928903Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8929130Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8929358Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8929585Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8929800Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8930006Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8930218Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8930463Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8930880Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8931240Z return mod(**inputs) 2025-09-07T07:56:20.8931671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8932116Z outputs = self.model( 2025-09-07T07:56:20.8932555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8933011Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8933452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8933926Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8934276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8934639Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8935077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.8935562Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.8936050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8936566Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8937010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8937485Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8937679Z 2025-09-07T07:56:20.8937786Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8938161Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8938529Z return mod(**inputs) 2025-09-07T07:56:20.8938953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8939392Z outputs = self.model( 2025-09-07T07:56:20.8939828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8940291Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8940746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8941203Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8941557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8941938Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8942400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.8942895Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.8943379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8943863Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8944328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8944799Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8944965Z 2025-09-07T07:56:20.8945194Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8945420Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8945639Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8945855Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8946069Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8946277Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8946492Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8946710Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8946937Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8947152Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8947412Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8947785Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8948124Z return mod(**inputs) 2025-09-07T07:56:20.8948555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8948989Z outputs = self.model( 2025-09-07T07:56:20.8949420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8949866Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8950308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8950751Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8951107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8951558Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8952016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8952494Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8953048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8953512Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8953960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8954446Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8954634Z 2025-09-07T07:56:20.8954750Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8955113Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8955469Z return mod(**inputs) 2025-09-07T07:56:20.8955889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8956319Z outputs = self.model( 2025-09-07T07:56:20.8956741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8957185Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8957637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8958070Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8958425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8958812Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8959246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8959704Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8960181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8960685Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8961153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8961638Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8961817Z 2025-09-07T07:56:20.8961904Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8962134Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8962358Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8962573Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8962797Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8963018Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8963243Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8963458Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8963713Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8964104Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8964462Z return mod(**inputs) 2025-09-07T07:56:20.8964910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8965370Z outputs = self.model( 2025-09-07T07:56:20.8965825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8966360Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8966891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8967385Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8967786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8968235Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8968745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.8969213Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.8969675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8970141Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8970595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8971085Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8971271Z 2025-09-07T07:56:20.8971386Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8971748Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8972094Z return mod(**inputs) 2025-09-07T07:56:20.8972515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8972952Z outputs = self.model( 2025-09-07T07:56:20.8973363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8973797Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8974242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8974690Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8975048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8975419Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8975863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.8976340Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.8976817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8977286Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8977726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8978189Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8978357Z 2025-09-07T07:56:20.8978439Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8978656Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8978869Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8979076Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8979286Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8979496Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8979705Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8979908Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8980120Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8980330Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8980639Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8981000Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8981338Z return mod(**inputs) 2025-09-07T07:56:20.8981766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8982209Z outputs = self.model( 2025-09-07T07:56:20.8982668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8983108Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8983553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8984002Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8984358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8984719Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8985174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8985654Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8986107Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8986555Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8986983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.8987449Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.8987637Z 2025-09-07T07:56:20.8987741Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8988099Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8988432Z return mod(**inputs) 2025-09-07T07:56:20.8988888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8989317Z outputs = self.model( 2025-09-07T07:56:20.8989732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8990169Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8990602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8991029Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.8991392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.8991748Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.8992182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.8992638Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.8993090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.8993546Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.8993979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.8994423Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.8994582Z 2025-09-07T07:56:20.8994669Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8994912Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8995124Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8995336Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8995544Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8995745Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8995954Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8996165Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.8996437Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.8996797Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.8997131Z return mod(**inputs) 2025-09-07T07:56:20.8997561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.8998004Z outputs = self.model( 2025-09-07T07:56:20.8998425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.8998872Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.8999301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.8999732Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.9000083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.9000437Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.9000874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.9001354Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.9001836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.9002310Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.9002754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.9003234Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.9003424Z 2025-09-07T07:56:20.9003535Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.9003925Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.9004275Z return mod(**inputs) 2025-09-07T07:56:20.9004713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.9005180Z outputs = self.model( 2025-09-07T07:56:20.9005624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.9006102Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.9006583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.9007146Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.9007554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.9007981Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.9008474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.9008985Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.9009495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.9010001Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.9010450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.9010912Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.9011075Z 2025-09-07T07:56:20.9011158Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9011377Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9011638Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9011850Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9012051Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9012259Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9012469Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9012679Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9012886Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9013091Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9013333Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.9013700Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.9014032Z return mod(**inputs) 2025-09-07T07:56:20.9014447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.9014891Z outputs = self.model( 2025-09-07T07:56:20.9015315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.9015782Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.9016249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.9016684Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.9017046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.9017419Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.9017869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.9018331Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.9018804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.9019272Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.9019725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.9020182Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.9020365Z 2025-09-07T07:56:20.9020468Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.9020825Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.9021147Z return mod(**inputs) 2025-09-07T07:56:20.9021574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.9021988Z outputs = self.model( 2025-09-07T07:56:20.9022384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.9022806Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.9023253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.9023683Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.9024072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.9024420Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.9024848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 398, in forward 2025-09-07T07:56:20.9025292Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T07:56:20.9025766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.9026211Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.9026628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.9027066Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.9027230Z 2025-09-07T07:56:20.9027308Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9027514Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9027715Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9027918Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9028118Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9028316Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9028506Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9028704Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9028930Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.9029279Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.9029600Z return mod(**inputs) 2025-09-07T07:56:20.9030013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.9030453Z outputs = self.model( 2025-09-07T07:56:20.9030877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.9031318Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.9031754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.9032185Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.9032533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.9032891Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.9033326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.9033782Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.9034251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.9034357Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.9034641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T07:56:20.9034767Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:56:20.9034777Z 2025-09-07T07:56:20.9034885Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.9035082Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.9035159Z return mod(**inputs) 2025-09-07T07:56:20.9035467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1375, in forward 2025-09-07T07:56:20.9035547Z outputs = self.model( 2025-09-07T07:56:20.9035899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1213, in forward 2025-09-07T07:56:20.9035975Z decoder_outputs = self.decoder( 2025-09-07T07:56:20.9036293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1057, in forward 2025-09-07T07:56:20.9036369Z layer_outputs = decoder_layer( 2025-09-07T07:56:20.9036638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:56:20.9036725Z return super().__call__(*args, **kwargs) 2025-09-07T07:56:20.9037032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 415, in forward 2025-09-07T07:56:20.9037150Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T07:56:20.9037457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 237, in forward 2025-09-07T07:56:20.9037565Z attn_output, attn_weights = attention_interface( 2025-09-07T07:56:20.9037855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T07:56:20.9037972Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T07:56:20.9037976Z 2025-09-07T07:56:20.9038058Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9038141Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9038228Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9038305Z cudagraph partition due to non gpu ops 2025-09-07T07:56:20.9038419Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:56:20.9038621Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:56:20.9038690Z return mod(**inputs) 2025-09-07T07:56:20.9039019Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/blenderbot_small/modeling_blenderbot_small.py", line 1398, in forward 2025-09-07T07:56:20.9039187Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) 2025-09-07T07:56:20.9039192Z 2025-09-07T07:56:34.5307364Z Compilation time (from dynamo_timed): 45.471318476 2025-09-07T07:56:34.5323655Z pass 2025-09-07T07:56:34.5324160Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:56:34.5325290Z TIMING: _recursive_pre_grad_passes:0.32872 _recursive_joint_graph_passes:0.5881 _recursive_post_grad_passes:0.10504 linear_unary_template_precompiling:0.02204 async_compile.wait:0.77406 code_gen:13.32944 inductor_compile:32.75538 backend_compile:41.71112 gc:0.00072 entire_frame_compile:45.47132 total_wall_time:45.47132 2025-09-07T07:56:34.5326539Z STATS: call_* op count: 654 | FakeTensorMode.__torch_dispatch__:47166 | FakeTensor.__torch_dispatch__:5092 | ProxyTorchDispatchMode.__torch_dispatch__:12894 2025-09-07T07:56:34.5327319Z Dynamo produced 1 graphs covering 654 ops with 0 graph breaks (0 unique) 2025-09-07T07:56:37.4564567Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:56:37.4565393Z import pynvml # type: ignore[import] 2025-09-07T07:56:40.1269574Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-09-07T07:56:40.1270522Z from pkg_resources import resource_filename 2025-09-07T07:56:40.7647315Z 2025-09-07T07:56:41.9848093Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:56:41.9848504Z loading model: 0it [00:01, ?it/s] 2025-09-07T07:56:41.9854740Z cpu eval CamemBert 2025-09-07T07:56:42.4642995Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:56:42.5918882Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:56:42.7258628Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:57:06.9555945Z Autotune Choices Stats: 2025-09-07T07:57:06.9559632Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_linear_pointwise", "best_time": 2.2689370000534836} 2025-09-07T07:57:06.9561566Z AUTOTUNE linear_unary(512x768, 32005x768, 32005) 2025-09-07T07:57:06.9561941Z strides: [768, 1], [1, 0], [1] 2025-09-07T07:57:06.9562633Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:57:06.9562995Z _linear_pointwise 2.2689 ms 100.0% 2025-09-07T07:57:06.9563244Z cpp_CppMicroGemmAMX_73 2.6003 ms 87.3% 2025-09-07T07:57:06.9563644Z SingleProcess AUTOTUNE benchmarking takes 0.6294 seconds and 1.3576 seconds precompiling for 2 choices 2025-09-07T07:57:07.3755211Z cudagraph partition due to non gpu ops. 
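The timing breakdown and the AUTOTUNE block above show Inductor benchmarking two kernels for the 512x768 by 32005x768 bf16 linear and keeping _linear_pointwise over the cpp_CppMicroGemmAMX template. A rough sketch of how a CPU max-autotune, freezing, bf16-autocast compile of a comparable linear layer can be set up; this is not the benchmark harness itself, and the knobs used are assumed to be the usual torch._inductor.config flags:

    import torch
    import torch.nn as nn
    import torch._inductor.config as inductor_config

    # Illustrative stand-ins for the "max_autotune ... amp_freezing" job settings;
    # the dynamo benchmark harness wires these up differently.
    inductor_config.max_autotune = True   # enables the AUTOTUNE choice benchmarking seen above
    inductor_config.freezing = True       # fold inference weights into the graph

    model = nn.Linear(768, 32005).eval()  # roughly the GEMM being autotuned (512x768 @ 768x32005)
    x = torch.randn(512, 768)

    compiled = torch.compile(model, dynamic=True)
    with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        out = compiled(x)
    print(out.shape, out.dtype)  # torch.Size([512, 32005]) torch.bfloat16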
Found from : 2025-09-07T07:57:07.3755752Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3756107Z return mod(**inputs) 2025-09-07T07:57:07.3756587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-09-07T07:57:07.3757023Z outputs = self.roberta( 2025-09-07T07:57:07.3757435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 886, in forward 2025-09-07T07:57:07.3757867Z embedding_output = self.embeddings( 2025-09-07T07:57:07.3758284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 90, in forward 2025-09-07T07:57:07.3758875Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-09-07T07:57:07.3759531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1590, in create_position_ids_from_input_ids 2025-09-07T07:57:07.3760103Z mask = input_ids.ne(padding_idx).int() 2025-09-07T07:57:07.3760280Z 2025-09-07T07:57:07.3760390Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3760624Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3760854Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3761085Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3761315Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3761532Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3761758Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3761990Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3762217Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3762433Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3762657Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3762880Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3763139Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:57:07.3763529Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3763893Z return mod(**inputs) 2025-09-07T07:57:07.3764343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-09-07T07:57:07.3764810Z outputs = self.roberta( 2025-09-07T07:57:07.3765237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 886, in forward 2025-09-07T07:57:07.3765687Z embedding_output = self.embeddings( 2025-09-07T07:57:07.3766444Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 90, in forward 2025-09-07T07:57:07.3767307Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-09-07T07:57:07.3767987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1591, in create_position_ids_from_input_ids 2025-09-07T07:57:07.3768745Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-09-07T07:57:07.3769006Z 2025-09-07T07:57:07.3769131Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:57:07.3769500Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3769832Z return mod(**inputs) 2025-09-07T07:57:07.3770230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-09-07T07:57:07.3770646Z outputs = self.roberta( 2025-09-07T07:57:07.3771061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 886, in forward 2025-09-07T07:57:07.3771505Z embedding_output = self.embeddings( 2025-09-07T07:57:07.3771944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 90, in forward 2025-09-07T07:57:07.3772529Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-09-07T07:57:07.3773128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1591, in create_position_ids_from_input_ids 2025-09-07T07:57:07.3773748Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-09-07T07:57:07.3774009Z 2025-09-07T07:57:07.3774099Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3774332Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3774558Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3774774Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3774998Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3775220Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3775442Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3775689Z cudagraph partition due to non gpu ops. 
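The CamemBert traces above end in create_position_ids_from_input_ids; the two lines they show (the ne() mask and the masked cumsum) are the whole trick: position ids only advance on non-padding tokens. A small self-contained sketch of that computation, with a final offset back by padding_idx added as the presumable last step (it is not visible in the log excerpt):

    import torch

    def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
        # The two lines visible in the trace: non-pad mask, then a cumulative
        # count that only advances on non-padding tokens.
        mask = input_ids.ne(padding_idx).int()
        incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
        # Assumed final step (not shown in the excerpt): shift past the padding index.
        return incremental_indices.long() + padding_idx

    ids = torch.tensor([[5, 6, 7, 1, 1]])  # 1 plays the padding id here
    print(create_position_ids_from_input_ids(ids, padding_idx=1))
    # tensor([[2, 3, 4, 1, 1]])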
Found from : 2025-09-07T07:57:07.3776086Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3776440Z return mod(**inputs) 2025-09-07T07:57:07.3776944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-09-07T07:57:07.3777376Z outputs = self.roberta( 2025-09-07T07:57:07.3777794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-09-07T07:57:07.3778231Z encoder_outputs = self.encoder( 2025-09-07T07:57:07.3778661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-09-07T07:57:07.3779098Z layer_outputs = layer_module( 2025-09-07T07:57:07.3779481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:57:07.3779880Z return super().__call__(*args, **kwargs) 2025-09-07T07:57:07.3780324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-09-07T07:57:07.3780772Z self_attention_outputs = self.attention( 2025-09-07T07:57:07.3781192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3781597Z return func(*args, **kwargs) 2025-09-07T07:57:07.3782068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-09-07T07:57:07.3782500Z self_outputs = self.self( 2025-09-07T07:57:07.3782897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3783300Z return func(*args, **kwargs) 2025-09-07T07:57:07.3783770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-09-07T07:57:07.3784276Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:57:07.3784474Z 2025-09-07T07:57:07.3784561Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3784865Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3785090Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3785309Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3785526Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3785749Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3785971Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3786192Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3786407Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3786629Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3786849Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3787070Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3787327Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:57:07.3787711Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3788063Z return mod(**inputs) 2025-09-07T07:57:07.3788501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-09-07T07:57:07.3788937Z outputs = self.roberta( 2025-09-07T07:57:07.3789349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-09-07T07:57:07.3789787Z encoder_outputs = self.encoder( 2025-09-07T07:57:07.3790222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-09-07T07:57:07.3790658Z layer_outputs = layer_module( 2025-09-07T07:57:07.3791042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:57:07.3791429Z return super().__call__(*args, **kwargs) 2025-09-07T07:57:07.3791872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-09-07T07:57:07.3792322Z self_attention_outputs = self.attention( 2025-09-07T07:57:07.3792738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3793136Z return func(*args, **kwargs) 2025-09-07T07:57:07.3793556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-09-07T07:57:07.3793988Z self_outputs = self.self( 2025-09-07T07:57:07.3794376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3794780Z return func(*args, **kwargs) 2025-09-07T07:57:07.3795179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-09-07T07:57:07.3795676Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:57:07.3795880Z 2025-09-07T07:57:07.3795966Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3796197Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3796425Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3796684Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3796908Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3797137Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3797357Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3797571Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3797793Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3798015Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3798233Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3798544Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3798824Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:57:07.3799227Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3799587Z return mod(**inputs) 2025-09-07T07:57:07.3800001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-09-07T07:57:07.3800449Z outputs = self.roberta( 2025-09-07T07:57:07.3800873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-09-07T07:57:07.3801314Z encoder_outputs = self.encoder( 2025-09-07T07:57:07.3801752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-09-07T07:57:07.3802186Z layer_outputs = layer_module( 2025-09-07T07:57:07.3802572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:57:07.3802972Z return super().__call__(*args, **kwargs) 2025-09-07T07:57:07.3803416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-09-07T07:57:07.3803859Z self_attention_outputs = self.attention( 2025-09-07T07:57:07.3804276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3804684Z return func(*args, **kwargs) 2025-09-07T07:57:07.3805126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-09-07T07:57:07.3805579Z self_outputs = self.self( 2025-09-07T07:57:07.3805981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3806402Z return func(*args, **kwargs) 2025-09-07T07:57:07.3806899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-09-07T07:57:07.3807426Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:57:07.3807632Z 2025-09-07T07:57:07.3807728Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3807956Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3808195Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3808425Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3808656Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3808889Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3809100Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3809314Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3809526Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3809733Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3809946Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3810158Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3810399Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:57:07.3810764Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3811099Z return mod(**inputs) 2025-09-07T07:57:07.3811497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-09-07T07:57:07.3811976Z outputs = self.roberta( 2025-09-07T07:57:07.3812370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-09-07T07:57:07.3812781Z encoder_outputs = self.encoder( 2025-09-07T07:57:07.3813191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-09-07T07:57:07.3813635Z layer_outputs = layer_module( 2025-09-07T07:57:07.3813995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:57:07.3814360Z return super().__call__(*args, **kwargs) 2025-09-07T07:57:07.3814781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-09-07T07:57:07.3815217Z self_attention_outputs = self.attention( 2025-09-07T07:57:07.3815619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3816002Z return func(*args, **kwargs) 2025-09-07T07:57:07.3816403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-09-07T07:57:07.3816822Z self_outputs = self.self( 2025-09-07T07:57:07.3817198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3817582Z return func(*args, **kwargs) 2025-09-07T07:57:07.3817985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-09-07T07:57:07.3818480Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:57:07.3818686Z 2025-09-07T07:57:07.3818771Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3819010Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3819237Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3819453Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3819676Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3819901Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3820124Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3820340Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3820563Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3820788Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3821014Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3821230Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3821490Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:57:07.3821888Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3822249Z return mod(**inputs) 2025-09-07T07:57:07.3822672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-09-07T07:57:07.3823122Z outputs = self.roberta( 2025-09-07T07:57:07.3823552Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-09-07T07:57:07.3823995Z encoder_outputs = self.encoder( 2025-09-07T07:57:07.3824454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-09-07T07:57:07.3824890Z layer_outputs = layer_module( 2025-09-07T07:57:07.3825272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:57:07.3825669Z return super().__call__(*args, **kwargs) 2025-09-07T07:57:07.3826116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-09-07T07:57:07.3826642Z self_attention_outputs = self.attention( 2025-09-07T07:57:07.3827058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3827470Z return func(*args, **kwargs) 2025-09-07T07:57:07.3827907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-09-07T07:57:07.3828355Z self_outputs = self.self( 2025-09-07T07:57:07.3828789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3829191Z return func(*args, **kwargs) 2025-09-07T07:57:07.3829618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-09-07T07:57:07.3830116Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:57:07.3830316Z 2025-09-07T07:57:07.3830409Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3830631Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3830855Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3831073Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3831294Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3831509Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3831727Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3831945Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3832165Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3832376Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3832597Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3832818Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3833070Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:57:07.3833452Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3833809Z return mod(**inputs) 2025-09-07T07:57:07.3834202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-09-07T07:57:07.3834616Z outputs = self.roberta( 2025-09-07T07:57:07.3835010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-09-07T07:57:07.3835417Z encoder_outputs = self.encoder( 2025-09-07T07:57:07.3835833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-09-07T07:57:07.3836245Z layer_outputs = layer_module( 2025-09-07T07:57:07.3836601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:57:07.3836971Z return super().__call__(*args, **kwargs) 2025-09-07T07:57:07.3837386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-09-07T07:57:07.3837814Z self_attention_outputs = self.attention( 2025-09-07T07:57:07.3838205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3838585Z return func(*args, **kwargs) 2025-09-07T07:57:07.3838981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-09-07T07:57:07.3839405Z self_outputs = self.self( 2025-09-07T07:57:07.3839795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3840214Z return func(*args, **kwargs) 2025-09-07T07:57:07.3840648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-09-07T07:57:07.3841182Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:57:07.3841428Z 2025-09-07T07:57:07.3841513Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3841741Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3841968Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3842182Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3842409Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3842632Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3842854Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3843105Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3843336Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3843566Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3843792Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3844022Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3844276Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:57:07.3844673Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3845249Z return mod(**inputs) 2025-09-07T07:57:07.3845679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-09-07T07:57:07.3846108Z outputs = self.roberta( 2025-09-07T07:57:07.3846543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-09-07T07:57:07.3847064Z encoder_outputs = self.encoder( 2025-09-07T07:57:07.3847522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-09-07T07:57:07.3847974Z layer_outputs = layer_module( 2025-09-07T07:57:07.3848369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:57:07.3848762Z return super().__call__(*args, **kwargs) 2025-09-07T07:57:07.3849210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-09-07T07:57:07.3849656Z self_attention_outputs = self.attention( 2025-09-07T07:57:07.3850063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3850469Z return func(*args, **kwargs) 2025-09-07T07:57:07.3850908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-09-07T07:57:07.3851347Z self_outputs = self.self( 2025-09-07T07:57:07.3851741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3852136Z return func(*args, **kwargs) 2025-09-07T07:57:07.3852563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-09-07T07:57:07.3853064Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:57:07.3853262Z 2025-09-07T07:57:07.3853358Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3853588Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3853807Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3854033Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3854258Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3854482Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3854702Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3854928Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3855155Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3855376Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3855593Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3855817Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3856074Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:57:07.3856565Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3856913Z return mod(**inputs) 2025-09-07T07:57:07.3857335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-09-07T07:57:07.3857775Z outputs = self.roberta( 2025-09-07T07:57:07.3858252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-09-07T07:57:07.3858693Z encoder_outputs = self.encoder( 2025-09-07T07:57:07.3859116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-09-07T07:57:07.3859550Z layer_outputs = layer_module( 2025-09-07T07:57:07.3859925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:57:07.3860321Z return super().__call__(*args, **kwargs) 2025-09-07T07:57:07.3860734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-09-07T07:57:07.3861159Z self_attention_outputs = self.attention( 2025-09-07T07:57:07.3861553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3861938Z return func(*args, **kwargs) 2025-09-07T07:57:07.3862492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-09-07T07:57:07.3862912Z self_outputs = self.self( 2025-09-07T07:57:07.3863282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3863661Z return func(*args, **kwargs) 2025-09-07T07:57:07.3864059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-09-07T07:57:07.3864529Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:57:07.3864713Z 2025-09-07T07:57:07.3864797Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3865018Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3865236Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3865449Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3865653Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3865867Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3866079Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3866289Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3866493Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3866701Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3866911Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3867121Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3867360Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:57:07.3901092Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3901428Z return mod(**inputs) 2025-09-07T07:57:07.3901871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1038, in forward 2025-09-07T07:57:07.3902304Z outputs = self.roberta( 2025-09-07T07:57:07.3902719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 950, in forward 2025-09-07T07:57:07.3903147Z encoder_outputs = self.encoder( 2025-09-07T07:57:07.3903560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 632, in forward 2025-09-07T07:57:07.3903974Z layer_outputs = layer_module( 2025-09-07T07:57:07.3904326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:57:07.3904697Z return super().__call__(*args, **kwargs) 2025-09-07T07:57:07.3905122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 540, in forward 2025-09-07T07:57:07.3905549Z self_attention_outputs = self.attention( 2025-09-07T07:57:07.3905936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3906319Z return func(*args, **kwargs) 2025-09-07T07:57:07.3906708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 467, in forward 2025-09-07T07:57:07.3907109Z self_outputs = self.self( 2025-09-07T07:57:07.3907471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T07:57:07.3907834Z return func(*args, **kwargs) 2025-09-07T07:57:07.3908227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 389, in forward 2025-09-07T07:57:07.3908684Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:57:07.3908867Z 2025-09-07T07:57:07.3908955Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3909174Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3909382Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3909594Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3909803Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3910013Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3910214Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3910429Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3910655Z cudagraph partition due to non gpu ops 2025-09-07T07:57:07.3910906Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:57:07.3911281Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:57:07.3911614Z return mod(**inputs) 2025-09-07T07:57:07.3912010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/camembert/modeling_camembert.py", line 1059, in forward 2025-09-07T07:57:07.3912551Z masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) 2025-09-07T07:57:07.3912796Z 2025-09-07T07:57:11.2350869Z Compilation time (from dynamo_timed): 27.245931198 2025-09-07T07:57:11.2388926Z pass 2025-09-07T07:57:11.2391198Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:57:11.2392325Z TIMING: _recursive_pre_grad_passes:0.03284 _recursive_joint_graph_passes:0.37789 _recursive_post_grad_passes:0.06911 linear_unary_template_precompiling:1.37039 linear_unary_template_autotuning:0.62779 async_compile.wait:0.8094 code_gen:3.30257 inductor_compile:19.69333 backend_compile:24.56897 gc:0.00061 entire_frame_compile:27.24593 total_wall_time:27.24593 2025-09-07T07:57:11.2393814Z STATS: call_* op count: 299 | FakeTensorMode.__torch_dispatch__:27079 | FakeTensor.__torch_dispatch__:2994 | ProxyTorchDispatchMode.__torch_dispatch__:7246 2025-09-07T07:57:11.2394312Z Dynamo produced 1 graphs covering 299 ops with 0 graph breaks (0 unique) 2025-09-07T07:57:13.9953649Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:57:13.9954881Z import pynvml # type: ignore[import] 2025-09-07T07:57:16.5939453Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-09-07T07:57:16.5941250Z from pkg_resources import resource_filename 2025-09-07T07:57:17.2239773Z 2025-09-07T07:57:25.7172340Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:57:25.7174064Z loading model: 0it [00:08, ?it/s] 2025-09-07T07:57:25.7179188Z cpu eval DebertaV2ForMaskedLM 2025-09-07T07:57:25.8479446Z Compilation time (from dynamo_timed): 0 2025-09-07T07:57:25.8480413Z pass_due_to_skip 2025-09-07T07:57:25.8483613Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:57:25.8484096Z TIMING: total_wall_time:0 2025-09-07T07:57:25.8484402Z STATS: call_* op count: 0 2025-09-07T07:57:25.8484711Z Dynamo produced 0 graphs covering 0 ops with 0 graph breaks (0 unique) 2025-09-07T07:57:27.7229994Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:57:27.7230801Z import pynvml # type: ignore[import] 2025-09-07T07:57:30.3762618Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. 
The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-09-07T07:57:30.3766024Z from pkg_resources import resource_filename 2025-09-07T07:57:31.0365865Z 2025-09-07T07:57:38.0587787Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:57:38.0589377Z loading model: 0it [00:07, ?it/s] 2025-09-07T07:57:38.0589638Z cpu eval DebertaV2ForQuestionAnswering 2025-09-07T07:57:41.3106459Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:57:42.5429374Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:57:43.7686221Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:58:12.4158427Z Autotune Choices Stats: 2025-09-07T07:58:12.4159108Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.14090199965721695} 2025-09-07T07:58:12.4164361Z AUTOTUNE linear_unary(512x1536, 1536x1536, 1536) 2025-09-07T07:58:12.4167637Z strides: [1536, 1], [1, 0], [1] 2025-09-07T07:58:12.4168319Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:58:12.4168715Z cpp_CppMicroGemmAMX_0 0.1409 ms 100.0% 2025-09-07T07:58:12.4168961Z _linear_pointwise 0.1957 ms 72.0% 2025-09-07T07:58:12.4169339Z SingleProcess AUTOTUNE benchmarking takes 0.2867 seconds and 1.4348 seconds precompiling for 2 choices 2025-09-07T07:58:14.5670432Z Autotune Choices Stats: 2025-09-07T07:58:14.5670886Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_2", "best_time": 0.2286189996993926} 2025-09-07T07:58:14.5680753Z AUTOTUNE bmm(24x512x64, 24x64x512) 2025-09-07T07:58:14.5681027Z strides: [32768, 64, 1], [32768, 512, 1] 2025-09-07T07:58:14.5681285Z dtypes: torch.bfloat16, torch.bfloat16 2025-09-07T07:58:14.5681544Z cpp_CppMicroGemmAMX_2 0.2286 ms 100.0% 2025-09-07T07:58:14.5683170Z bmm 1.3945 ms 16.4% 2025-09-07T07:58:14.5683548Z SingleProcess AUTOTUNE benchmarking takes 0.4123 seconds and 1.4885 seconds precompiling for 2 choices 2025-09-07T07:58:16.6945474Z Autotune Choices Stats: 2025-09-07T07:58:16.6945946Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_4", "best_time": 0.10922650017164415} 2025-09-07T07:58:16.6946359Z AUTOTUNE bmm(24x512x512, 24x512x64) 2025-09-07T07:58:16.6946630Z strides: [262144, 512, 1], [32768, 64, 1] 2025-09-07T07:58:16.6946873Z dtypes: torch.bfloat16, torch.bfloat16 2025-09-07T07:58:16.6947110Z cpp_CppMicroGemmAMX_4 0.1092 ms 100.0% 2025-09-07T07:58:16.6947333Z bmm 1.3782 ms 7.9% 2025-09-07T07:58:16.6947722Z SingleProcess AUTOTUNE benchmarking takes 0.4125 seconds and 1.4843 seconds precompiling for 2 choices 2025-09-07T07:58:18.8363565Z Autotune Choices Stats: 2025-09-07T07:58:18.8364072Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_linear_pointwise", "best_time": 1.9186019999324344} 2025-09-07T07:58:18.8376318Z AUTOTUNE linear_unary(512x1536, 6144x1536, 6144) 2025-09-07T07:58:18.8376622Z strides: [1536, 1], [1, 0], [1] 2025-09-07T07:58:18.8376885Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:58:18.8377164Z _linear_pointwise 1.9186 ms 100.0% 2025-09-07T07:58:18.8377406Z cpp_CppMicroGemmAMX_6 2.2672 ms 84.6% 2025-09-07T07:58:18.8377833Z SingleProcess AUTOTUNE benchmarking takes 0.3494 seconds and 1.5235 seconds precompiling for 2 choices 2025-09-07T07:58:20.6858569Z Autotune Choices Stats: 
2025-09-07T07:58:20.6859263Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_7", "best_time": 0.6631820001530286} 2025-09-07T07:58:20.6871546Z AUTOTUNE linear_unary(512x6144, 1536x6144, 1536) 2025-09-07T07:58:20.6872535Z strides: [6144, 1], [1, 0], [1] 2025-09-07T07:58:20.6872877Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:58:20.6873171Z cpp_CppMicroGemmAMX_7 0.6632 ms 100.0% 2025-09-07T07:58:20.6873454Z _linear_pointwise 0.8047 ms 82.4% 2025-09-07T07:58:20.6873863Z SingleProcess AUTOTUNE benchmarking takes 0.3430 seconds and 1.4302 seconds precompiling for 2 choices 2025-09-07T07:58:38.8521765Z Autotune Choices Stats: 2025-09-07T07:58:38.8522233Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_192", "best_time": 0.00750599974708166} 2025-09-07T07:58:38.8530383Z AUTOTUNE linear_unary(512x1536, 2x1536, 2) 2025-09-07T07:58:38.8530689Z strides: [1536, 1], [1, 0], [1] 2025-09-07T07:58:38.8531101Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:58:38.8531381Z cpp_CppMicroGemmAMX_192 0.0075 ms 100.0% 2025-09-07T07:58:38.8531619Z _linear_pointwise 0.0390 ms 19.3% 2025-09-07T07:58:38.8531997Z SingleProcess AUTOTUNE benchmarking takes 0.2676 seconds and 1.4522 seconds precompiling for 2 choices 2025-09-07T07:58:40.0470126Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:58:40.0470668Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0471099Z return mod(**inputs) 2025-09-07T07:58:40.0471584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1244, in forward 2025-09-07T07:58:40.0472068Z logits = self.qa_outputs(sequence_output) 2025-09-07T07:58:40.0472235Z 2025-09-07T07:58:40.0472337Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0473065Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0473323Z cudagraph partition due to non gpu ops. 
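The AUTOTUNE blocks above come from inductor's max-autotune GEMM selection: for each bf16 linear and bmm shape it benchmarks its C++ AMX micro-GEMM template (cpp_CppMicroGemmAMX_*) against the ATen fallback (_linear_pointwise / bmm) and keeps the faster choice. Below is a hedged, self-contained sketch of that setup under assumed shapes; the module is an illustrative stand-in, not the DeBERTa-v2 model, and the "amp_freezing" part of the job name is taken here to mean bf16 autocast plus inductor weight freezing (an assumption, since the harness flags are not shown in this log). The DeBERTa-v2 partition tracebacks resume immediately below.

```python
# Hedged sketch: compile a bf16 CPU linear + bmm with max-autotune so inductor
# benchmarks its C++ AMX micro-GEMM template against the ATen fallback, as in
# the AUTOTUNE lines above. Shapes mirror the log (512x1536 linear, 24x512x64
# bmm); the module itself is an illustrative stand-in.
import torch
import torch._inductor.config as inductor_config

inductor_config.freezing = True      # fold frozen weights, per the "freezing" in the job name (assumption)
inductor_config.max_autotune = True  # enable kernel-choice benchmarking

class TinyAttentionSlice(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(1536, 1536)

    def forward(self, x, k):
        q = self.proj(x)                                              # 512x1536 linear_unary candidate
        q = q.view(1, 512, 24, 64).permute(0, 2, 1, 3).reshape(24, 512, 64)
        return torch.bmm(q, k)                                        # 24x512x64 @ 24x64x512 bmm candidate

model = TinyAttentionSlice().eval().to(torch.bfloat16)
compiled = torch.compile(model, mode="max-autotune")

x = torch.randn(512, 1536, dtype=torch.bfloat16)
k = torch.randn(24, 64, 512, dtype=torch.bfloat16)
with torch.no_grad():
    out = compiled(x, k)
print(out.shape)  # torch.Size([24, 512, 512])
```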
Found from : 2025-09-07T07:58:40.0473740Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0474114Z return mod(**inputs) 2025-09-07T07:58:40.0474558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0475012Z outputs = self.deberta( 2025-09-07T07:58:40.0475572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0476029Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0476488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0476955Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0477373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0477801Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0478260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0478746Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0479247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0479669Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0480089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.0480656Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.0481268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0481820Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0482029Z 2025-09-07T07:58:40.0482150Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0482559Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0482939Z return mod(**inputs) 2025-09-07T07:58:40.0483367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0483817Z outputs = self.deberta( 2025-09-07T07:58:40.0484242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0484693Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0485143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0485677Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0486070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0486460Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0487089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0487608Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0488090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0488534Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0489000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0489701Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0489989Z 2025-09-07T07:58:40.0490113Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0490506Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0490852Z return mod(**inputs) 2025-09-07T07:58:40.0491341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0491779Z outputs = self.deberta( 2025-09-07T07:58:40.0492198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0492630Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0493057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0493579Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0493979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0494368Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0494803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0495455Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0495916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0496355Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0496785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0497366Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0497650Z 2025-09-07T07:58:40.0497741Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0498001Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0498391Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0498743Z return mod(**inputs) 2025-09-07T07:58:40.0499168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0499577Z outputs = self.deberta( 2025-09-07T07:58:40.0499970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0500378Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0500782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0501202Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0501575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0501952Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0502387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0502838Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0503285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0503758Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0504173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.0504752Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.0505322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0505822Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0506021Z 2025-09-07T07:58:40.0506131Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0506528Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0506862Z return mod(**inputs) 2025-09-07T07:58:40.0507250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0507665Z outputs = self.deberta( 2025-09-07T07:58:40.0508056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0508472Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0508883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0509304Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0509681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0510074Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0510513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0510970Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0511408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0511826Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0512233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.0512646Z context_layer = torch.bmm( 2025-09-07T07:58:40.0512769Z 2025-09-07T07:58:40.0512883Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0513244Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0513580Z return mod(**inputs) 2025-09-07T07:58:40.0513975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0514385Z outputs = self.deberta( 2025-09-07T07:58:40.0514923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0515340Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0515773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0516225Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0516639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0517032Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0517484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0517951Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0518423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0518885Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0519325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.0519926Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.0520194Z 2025-09-07T07:58:40.0520286Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0520520Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0520741Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0520965Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0521188Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0521462Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0521717Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0522098Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0522449Z return mod(**inputs) 2025-09-07T07:58:40.0522867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0523309Z outputs = self.deberta( 2025-09-07T07:58:40.0523716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0524159Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0524589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0525043Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0525447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0525833Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0526274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0526727Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0527281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0527746Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0528195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.0528731Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.0529287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0529808Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0530010Z 2025-09-07T07:58:40.0530135Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0530509Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0530846Z return mod(**inputs) 2025-09-07T07:58:40.0531244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0531657Z outputs = self.deberta( 2025-09-07T07:58:40.0532041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0532458Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0532866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0533298Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0533664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0534019Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0534477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0534946Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0535407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0535872Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0536313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0536875Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0537151Z 2025-09-07T07:58:40.0537261Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0537630Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0537967Z return mod(**inputs) 2025-09-07T07:58:40.0538353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0538770Z outputs = self.deberta( 2025-09-07T07:58:40.0539166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0539583Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0539987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0540421Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0540800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0541200Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0541645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0542105Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0542575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0543016Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0543432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0543992Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0544261Z 2025-09-07T07:58:40.0544345Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0544594Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0544963Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0545749Z return mod(**inputs) 2025-09-07T07:58:40.0546195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0546626Z outputs = self.deberta( 2025-09-07T07:58:40.0547051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0547487Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0547900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0548324Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0548697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0549070Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0549486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0550005Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0550446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0550893Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0551305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.0551885Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.0552454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0552951Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0553150Z 2025-09-07T07:58:40.0553258Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0553625Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0553945Z return mod(**inputs) 2025-09-07T07:58:40.0554326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0554714Z outputs = self.deberta( 2025-09-07T07:58:40.0555103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0555505Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0555902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0556315Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0556682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0557056Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0557467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0557892Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0558313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0558726Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0559133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.0559540Z context_layer = torch.bmm( 2025-09-07T07:58:40.0559661Z 2025-09-07T07:58:40.0559777Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0560137Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0560473Z return mod(**inputs) 2025-09-07T07:58:40.0560867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0561277Z outputs = self.deberta( 2025-09-07T07:58:40.0561671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0562074Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0562482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0562904Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0563277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0563635Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0564079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0564506Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0564933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0565346Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0565777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.0566310Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.0566577Z 2025-09-07T07:58:40.0566667Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0566982Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0567216Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0567444Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0567676Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0567902Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0568159Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0568534Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0568866Z return mod(**inputs) 2025-09-07T07:58:40.0569266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0569682Z outputs = self.deberta( 2025-09-07T07:58:40.0570073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0570475Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0570879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0571309Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0571688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0572047Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0572460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0572891Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0573322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0573741Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0574146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.0574680Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.0575246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0575754Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0575944Z 2025-09-07T07:58:40.0576061Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0576425Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0576762Z return mod(**inputs) 2025-09-07T07:58:40.0577157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0577569Z outputs = self.deberta( 2025-09-07T07:58:40.0577966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0578432Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0578842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0579268Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0579648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0580011Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0580487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0580911Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0581334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0581733Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0582133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0582669Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0582937Z 2025-09-07T07:58:40.0583043Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0583404Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0583734Z return mod(**inputs) 2025-09-07T07:58:40.0584109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0584509Z outputs = self.deberta( 2025-09-07T07:58:40.0584891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0585293Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0585687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0586092Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0586459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0586817Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0587221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0587637Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0588048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0588451Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0588851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0589385Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0589644Z 2025-09-07T07:58:40.0589732Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0589965Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0590323Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0590654Z return mod(**inputs) 2025-09-07T07:58:40.0591034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0591426Z outputs = self.deberta( 2025-09-07T07:58:40.0591808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0592248Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0592646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0593056Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0593417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0593778Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0594214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0594633Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0595042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0595456Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0595861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.0596378Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.0596930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0597423Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0597613Z 2025-09-07T07:58:40.0597720Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0598081Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0598407Z return mod(**inputs) 2025-09-07T07:58:40.0598791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0599191Z outputs = self.deberta( 2025-09-07T07:58:40.0599565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0599966Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0600365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0600792Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0601165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0601538Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0601963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0602397Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0602848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0603298Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0603735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.0604167Z context_layer = torch.bmm( 2025-09-07T07:58:40.0604296Z 2025-09-07T07:58:40.0604416Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0604805Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0605168Z return mod(**inputs) 2025-09-07T07:58:40.0605596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0606033Z outputs = self.deberta( 2025-09-07T07:58:40.0606512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0607088Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0607531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0608004Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0608376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0608776Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0609178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0609578Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0610017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0610460Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0610902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.0611460Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.0611734Z 2025-09-07T07:58:40.0611818Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0612038Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0612254Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0612463Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0612673Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0612878Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0613114Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0613478Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0613808Z return mod(**inputs) 2025-09-07T07:58:40.0614195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0614596Z outputs = self.deberta( 2025-09-07T07:58:40.0614988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0615406Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0615808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0616239Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0616629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0616991Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0617397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0617834Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0618265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0618685Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0619098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.0619630Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.0620192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0620703Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0620901Z 2025-09-07T07:58:40.0621044Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0621417Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0621743Z return mod(**inputs) 2025-09-07T07:58:40.0622141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0622553Z outputs = self.deberta( 2025-09-07T07:58:40.0622980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0623394Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0623792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0624217Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0624594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0624972Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0625414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0625861Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0626316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0626748Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0627161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0627711Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0627980Z 2025-09-07T07:58:40.0628090Z cudagraph partition due to non gpu ops. 
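Editor's note: here the partition is attributed to the attention-score bmm at line 248, where the transposed key matrix is divided by `scale.to(dtype=query_layer.dtype)`. The sketch below reproduces only that quoted line; how `scale` is built is not shown in the trace, so creating it as a 0-dim float tensor is an assumption (a CPU-resident scalar like this is one plausible "non gpu op", but the log does not identify the exact offending op):

```python
import torch

heads, seq, head_dim = 12, 128, 64              # hypothetical sizes
query_layer = torch.randn(2 * heads, seq, head_dim)
key_layer = torch.randn(2 * heads, seq, head_dim)

# Assumed 0-dim scale tensor; only the .to(dtype=...) cast appears in the trace.
scale = torch.sqrt(torch.tensor(head_dim, dtype=torch.float32))

# Quoted at modeling_deberta_v2.py:248.
attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
print(attention_scores.shape)  # torch.Size([24, 128, 128])
```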
Found from : 2025-09-07T07:58:40.0628459Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0628794Z return mod(**inputs) 2025-09-07T07:58:40.0629188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0629594Z outputs = self.deberta( 2025-09-07T07:58:40.0629985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0630408Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0630805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0631218Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0631578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0631942Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0632343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0632758Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0633173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0633571Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0633976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0634505Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0634757Z 2025-09-07T07:58:40.0634842Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0635077Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0635473Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0635799Z return mod(**inputs) 2025-09-07T07:58:40.0636188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0636583Z outputs = self.deberta( 2025-09-07T07:58:40.0636976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0637367Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0637750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0638155Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0638518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0638875Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0639280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0639697Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0640118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0640528Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0640924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.0641440Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.0641995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0642494Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0642679Z 2025-09-07T07:58:40.0642794Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0643152Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0643488Z return mod(**inputs) 2025-09-07T07:58:40.0643887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0644300Z outputs = self.deberta( 2025-09-07T07:58:40.0644688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0645295Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0645710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0646145Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0646525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0646941Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0647362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0647795Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0648232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0648650Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0649058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.0649472Z context_layer = torch.bmm( 2025-09-07T07:58:40.0649706Z 2025-09-07T07:58:40.0649815Z cudagraph partition due to non gpu ops. 
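Editor's note: the last distinct frame in this group is the probabilities-times-values bmm at line 268; the log truncates its arguments, so the operands below are assumptions chosen to match the shapes from the earlier frames. Taken together, the five distinct source lines cited so far (236, 238, 248, 268, 272) cover one self-attention block of the DeBERTa-v2 model, and the same cycle of partition messages repeats per encoder layer through the rest of this section.

```python
import torch

heads, seq, head_dim = 12, 128, 64                                         # hypothetical sizes
attention_probs = torch.softmax(torch.randn(2 * heads, seq, seq), dim=-1)  # assumed operand
value_layer = torch.randn(2 * heads, seq, head_dim)                        # assumed operand

# Quoted (truncated) at modeling_deberta_v2.py:268: context_layer = torch.bmm(
context_layer = torch.bmm(attention_probs, value_layer)
print(context_layer.shape)  # torch.Size([24, 128, 64])
```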
Found from : 2025-09-07T07:58:40.0650190Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0650536Z return mod(**inputs) 2025-09-07T07:58:40.0650927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0651345Z outputs = self.deberta( 2025-09-07T07:58:40.0651788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0652214Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0652615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0653045Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0653424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0653845Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0654263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0654688Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0655123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0655543Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0655959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.0656488Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.0656734Z 2025-09-07T07:58:40.0656821Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0657049Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0657265Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0657475Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0657678Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0657892Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0658118Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0658463Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0658777Z return mod(**inputs) 2025-09-07T07:58:40.0659150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0659547Z outputs = self.deberta( 2025-09-07T07:58:40.0659920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0660311Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0660702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0661117Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0661485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0661849Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0662257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0662675Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0663102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0663523Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0663937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.0664512Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.0665056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0665543Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0665729Z 2025-09-07T07:58:40.0665862Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0666214Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0666529Z return mod(**inputs) 2025-09-07T07:58:40.0666893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0667287Z outputs = self.deberta( 2025-09-07T07:58:40.0667656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0668044Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0668420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0668819Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0669182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0669534Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0669942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0670348Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0670757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0671154Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0671548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0672067Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0672324Z 2025-09-07T07:58:40.0672428Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0672777Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0673093Z return mod(**inputs) 2025-09-07T07:58:40.0673465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0673856Z outputs = self.deberta( 2025-09-07T07:58:40.0674220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0674606Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0674986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0675386Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0675738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0676090Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0676482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0676888Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0677297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0677718Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0678112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0678627Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0678883Z 2025-09-07T07:58:40.0678971Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0679245Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0679589Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0679908Z return mod(**inputs) 2025-09-07T07:58:40.0680282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0680682Z outputs = self.deberta( 2025-09-07T07:58:40.0681063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0681472Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0681866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0682292Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0682672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0683037Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0683454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0683886Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0684317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0684744Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0685147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.0685680Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.0686251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0686823Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0687033Z 2025-09-07T07:58:40.0687159Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0687523Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0687862Z return mod(**inputs) 2025-09-07T07:58:40.0688267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0688685Z outputs = self.deberta( 2025-09-07T07:58:40.0689087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0689493Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0689906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0690336Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0690714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0691077Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0691493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0691967Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0692405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0692821Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0693227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.0693641Z context_layer = torch.bmm( 2025-09-07T07:58:40.0693807Z 2025-09-07T07:58:40.0693916Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0694305Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0694651Z return mod(**inputs) 2025-09-07T07:58:40.0695047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0695466Z outputs = self.deberta( 2025-09-07T07:58:40.0695867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0696306Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0696701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0697125Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0697513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0697884Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0698295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0698714Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0699149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0699567Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0699980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.0700526Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.0700772Z 2025-09-07T07:58:40.0700861Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0701086Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0701304Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0701521Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0701728Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0701943Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0702182Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0702560Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0702895Z return mod(**inputs) 2025-09-07T07:58:40.0703278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0703687Z outputs = self.deberta( 2025-09-07T07:58:40.0704077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0704495Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0704893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0705320Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0705696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0706104Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0706509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0706920Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0707346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0707739Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0708159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.0708659Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.0709190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0709686Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0709876Z 2025-09-07T07:58:40.0709983Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0710353Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0710670Z return mod(**inputs) 2025-09-07T07:58:40.0711036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0711426Z outputs = self.deberta( 2025-09-07T07:58:40.0711801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0712193Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0712575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0712992Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0713353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0713706Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0714100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0714501Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0714920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0715314Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0715707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0716227Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0716487Z 2025-09-07T07:58:40.0716589Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0716950Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0717271Z return mod(**inputs) 2025-09-07T07:58:40.0717646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0718035Z outputs = self.deberta( 2025-09-07T07:58:40.0718404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0718796Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0719179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0719583Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0719975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0720329Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0720722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0721138Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0721600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0722003Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0722407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0722945Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0723214Z 2025-09-07T07:58:40.0723297Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0723539Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0723891Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0724218Z return mod(**inputs) 2025-09-07T07:58:40.0724607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0725013Z outputs = self.deberta( 2025-09-07T07:58:40.0725406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0725818Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0726211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0726622Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0727078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0727438Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0727845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0728269Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0728694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0729110Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0729514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.0730047Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.0730610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0731105Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0731291Z 2025-09-07T07:58:40.0731406Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0731757Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0732083Z return mod(**inputs) 2025-09-07T07:58:40.0732469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0732872Z outputs = self.deberta( 2025-09-07T07:58:40.0733252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0733652Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0734096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0734516Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0734888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0735251Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0735693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0736122Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0736545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0736969Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0737396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.0737809Z context_layer = torch.bmm( 2025-09-07T07:58:40.0737934Z 2025-09-07T07:58:40.0738039Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0738404Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0738734Z return mod(**inputs) 2025-09-07T07:58:40.0739120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0739537Z outputs = self.deberta( 2025-09-07T07:58:40.0739937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0740337Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0740742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0741176Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0741547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0741918Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0742337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0742766Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0743200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0743618Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0744097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.0744634Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.0744884Z 2025-09-07T07:58:40.0744969Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0745350Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0745581Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0745802Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0746019Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0746226Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0746476Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0746850Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0747187Z return mod(**inputs) 2025-09-07T07:58:40.0747576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0747994Z outputs = self.deberta( 2025-09-07T07:58:40.0748462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0748879Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0749278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0749710Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0750135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0750515Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0750936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0751370Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0751790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0752193Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0752593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.0753099Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.0753637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0754139Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0754330Z 2025-09-07T07:58:40.0754438Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0754798Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0755128Z return mod(**inputs) 2025-09-07T07:58:40.0755511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0755922Z outputs = self.deberta( 2025-09-07T07:58:40.0756312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0756736Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0757133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0757538Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0757907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0758268Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0758668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0759084Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0759500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0759901Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0760309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0760854Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0761122Z 2025-09-07T07:58:40.0761239Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0761604Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0761937Z return mod(**inputs) 2025-09-07T07:58:40.0762331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0762788Z outputs = self.deberta( 2025-09-07T07:58:40.0763161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0763562Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0763954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0764390Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0764767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0765121Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0765539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0765973Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0766409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0766883Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0767307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0767866Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0768160Z 2025-09-07T07:58:40.0768265Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0768510Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0768868Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0769209Z return mod(**inputs) 2025-09-07T07:58:40.0769609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0770027Z outputs = self.deberta( 2025-09-07T07:58:40.0770429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0770841Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0771251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0771697Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0772078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0772451Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0772857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0773293Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0773725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0774141Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0774549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.0775082Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.0775646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0791895Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0792116Z 2025-09-07T07:58:40.0792232Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0792606Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0793074Z return mod(**inputs) 2025-09-07T07:58:40.0793468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0793883Z outputs = self.deberta( 2025-09-07T07:58:40.0794287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0794767Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0795184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0795603Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0795982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0796344Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0796762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0797195Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0797616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0798029Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0798443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.0798862Z context_layer = torch.bmm( 2025-09-07T07:58:40.0798984Z 2025-09-07T07:58:40.0799099Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0799469Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0799806Z return mod(**inputs) 2025-09-07T07:58:40.0800201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0800609Z outputs = self.deberta( 2025-09-07T07:58:40.0800989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0801393Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0801795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0802212Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0802580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0802937Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0803344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0803773Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0804205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0804611Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0805029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.0805565Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.0805818Z 2025-09-07T07:58:40.0805908Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0806134Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0806345Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0806560Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0806850Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0807114Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0807357Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0807743Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0808094Z return mod(**inputs) 2025-09-07T07:58:40.0808494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0808908Z outputs = self.deberta( 2025-09-07T07:58:40.0809327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0809730Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0810129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0810544Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0810909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0811275Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0811693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0812107Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0812523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0812914Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0813309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.0813818Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.0814363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0814849Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0815031Z 2025-09-07T07:58:40.0815137Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0815496Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0815814Z return mod(**inputs) 2025-09-07T07:58:40.0816197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0816592Z outputs = self.deberta( 2025-09-07T07:58:40.0816960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0817357Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0817749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0818151Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0818510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0818853Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0819255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0819673Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0820096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0820497Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0820900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0821472Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0821744Z 2025-09-07T07:58:40.0821850Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0822212Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0822530Z return mod(**inputs) 2025-09-07T07:58:40.0822942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0823348Z outputs = self.deberta( 2025-09-07T07:58:40.0823719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0824112Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0824488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0824893Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0825252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0825601Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0825992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0826391Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0826803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0827193Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0827581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0828091Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0828354Z 2025-09-07T07:58:40.0828436Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0828675Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0829032Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0829358Z return mod(**inputs) 2025-09-07T07:58:40.0829737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0830145Z outputs = self.deberta( 2025-09-07T07:58:40.0830543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0830943Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0831336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0831754Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0832112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0832457Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0832851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0833264Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0833672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0834073Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0834473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.0835020Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.0835568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0836055Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0836249Z 2025-09-07T07:58:40.0836356Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0836751Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0837077Z return mod(**inputs) 2025-09-07T07:58:40.0837461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0837852Z outputs = self.deberta( 2025-09-07T07:58:40.0838230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0838631Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0839023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0839428Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0839792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0840148Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0840548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0840961Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0841366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0841767Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0842167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.0842564Z context_layer = torch.bmm( 2025-09-07T07:58:40.0842680Z 2025-09-07T07:58:40.0842792Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-07T07:58:40.0843142Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:58:40.0843469Z     return mod(**inputs)
2025-09-07T07:58:40.0843844Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-09-07T07:58:40.0844239Z     outputs = self.deberta(
2025-09-07T07:58:40.0844610Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-09-07T07:58:40.0845164Z     encoder_outputs = self.encoder(
2025-09-07T07:58:40.0845583Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-09-07T07:58:40.0846007Z     output_states, attn_weights = layer_module(
2025-09-07T07:58:40.0846378Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:58:40.0846734Z     return super().__call__(*args, **kwargs)
2025-09-07T07:58:40.0847214Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-09-07T07:58:40.0847679Z     attention_output, att_matrix = self.attention(
2025-09-07T07:58:40.0848128Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-09-07T07:58:40.0848530Z     self_output, att_matrix = self.self(
2025-09-07T07:58:40.0848934Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward
2025-09-07T07:58:40.0849532Z     context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1))
2025-09-07T07:58:40.0849776Z
2025-09-07T07:58:40.0849861Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.0850083Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.0850300Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.0850502Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.0850760Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.0850962Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.0851191Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T07:58:40.0851536Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:58:40.0851850Z     return mod(**inputs)
2025-09-07T07:58:40.0852223Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-09-07T07:58:40.0852615Z     outputs = self.deberta(
2025-09-07T07:58:40.0852978Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-09-07T07:58:40.0853368Z     encoder_outputs = self.encoder(
2025-09-07T07:58:40.0853753Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-09-07T07:58:40.0854159Z     output_states, attn_weights = layer_module(
2025-09-07T07:58:40.0854524Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:58:40.0854870Z     return super().__call__(*args, **kwargs)
2025-09-07T07:58:40.0855263Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-09-07T07:58:40.0855665Z     attention_output, att_matrix = self.attention(
2025-09-07T07:58:40.0856072Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-09-07T07:58:40.0856460Z     self_output, att_matrix = self.self(
2025-09-07T07:58:40.0856839Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward
2025-09-07T07:58:40.0857334Z     query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads)
2025-09-07T07:58:40.0857867Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores
2025-09-07T07:58:40.0858346Z     return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))
2025-09-07T07:58:40.0858526Z
2025-09-07T07:58:40.0858635Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T07:58:40.0858977Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:58:40.0859299Z     return mod(**inputs)
2025-09-07T07:58:40.0859671Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-09-07T07:58:40.0860056Z     outputs = self.deberta(
2025-09-07T07:58:40.0860426Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-09-07T07:58:40.0860813Z     encoder_outputs = self.encoder(
2025-09-07T07:58:40.0861197Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-09-07T07:58:40.0861598Z     output_states, attn_weights = layer_module(
2025-09-07T07:58:40.0861974Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:58:40.0862318Z     return super().__call__(*args, **kwargs)
2025-09-07T07:58:40.0862737Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-09-07T07:58:40.0863146Z     attention_output, att_matrix = self.attention(
2025-09-07T07:58:40.0863553Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-09-07T07:58:40.0863936Z     self_output, att_matrix = self.self(
2025-09-07T07:58:40.0864337Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward
2025-09-07T07:58:40.0864846Z     attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
2025-09-07T07:58:40.0865106Z
2025-09-07T07:58:40.0865209Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T07:58:40.0865560Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:58:40.0865879Z     return mod(**inputs)
2025-09-07T07:58:40.0866243Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-09-07T07:58:40.0866632Z     outputs = self.deberta(
2025-09-07T07:58:40.0867008Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-09-07T07:58:40.0867386Z     encoder_outputs = self.encoder(
2025-09-07T07:58:40.0867757Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-09-07T07:58:40.0868148Z     output_states, attn_weights = layer_module(
2025-09-07T07:58:40.0868495Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:58:40.0868834Z     return super().__call__(*args, **kwargs)
2025-09-07T07:58:40.0869216Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-09-07T07:58:40.0869604Z     attention_output, att_matrix = self.attention(
2025-09-07T07:58:40.0870002Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-09-07T07:58:40.0870389Z     self_output, att_matrix = self.self(
2025-09-07T07:58:40.0870790Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward
2025-09-07T07:58:40.0871300Z     attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
2025-09-07T07:58:40.0871544Z
2025-09-07T07:58:40.0871623Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.0871855Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T07:58:40.0872193Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0872503Z return mod(**inputs) 2025-09-07T07:58:40.0872865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0873242Z outputs = self.deberta( 2025-09-07T07:58:40.0873604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0873978Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0874352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0874732Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0875076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0875418Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0875797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0876226Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0876616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0877001Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0877409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.0877902Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.0878427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0878890Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0879076Z 2025-09-07T07:58:40.0879177Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0879529Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0879848Z return mod(**inputs) 2025-09-07T07:58:40.0880230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0880667Z outputs = self.deberta( 2025-09-07T07:58:40.0881060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0881471Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0881861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0882268Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0882622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0882975Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0883379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0883797Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0884214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0884623Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0885031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.0885441Z context_layer = torch.bmm( 2025-09-07T07:58:40.0885562Z 2025-09-07T07:58:40.0885679Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0886054Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0886424Z return mod(**inputs) 2025-09-07T07:58:40.0886921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0887366Z outputs = self.deberta( 2025-09-07T07:58:40.0887784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0888217Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0888604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0889006Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0889364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0889712Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0890157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0890560Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0890964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0891354Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0891766Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.0892261Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.0892498Z 2025-09-07T07:58:40.0892579Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0892793Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0893002Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0893201Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0893401Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0893603Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0893831Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0894174Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0894488Z return mod(**inputs) 2025-09-07T07:58:40.0894864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0895250Z outputs = self.deberta( 2025-09-07T07:58:40.0895616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0896001Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0896383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0896789Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0897145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0897484Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0897877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0898285Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0898687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0899075Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0899458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.0899962Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.0900490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0900964Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0901144Z 2025-09-07T07:58:40.0901256Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0901599Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0901942Z return mod(**inputs) 2025-09-07T07:58:40.0902313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0902702Z outputs = self.deberta( 2025-09-07T07:58:40.0903065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0903481Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0903864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0904265Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0904618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0904988Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0905389Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0905780Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0906174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0906557Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0906924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0907427Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0907684Z 2025-09-07T07:58:40.0907785Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0908137Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0908450Z return mod(**inputs) 2025-09-07T07:58:40.0908817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0909214Z outputs = self.deberta( 2025-09-07T07:58:40.0909590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0910005Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0910386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0910778Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0911131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0911484Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0911876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0912271Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0912679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0913069Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0913461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0913980Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0914230Z 2025-09-07T07:58:40.0914309Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0914546Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0914905Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0915220Z return mod(**inputs) 2025-09-07T07:58:40.0915593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0915973Z outputs = self.deberta( 2025-09-07T07:58:40.0916346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0916773Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0917152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0917543Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0917897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0918242Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0918661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0919071Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0919468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0919861Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0920252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.0920806Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.0921349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0921838Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0922031Z 2025-09-07T07:58:40.0922417Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0922786Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0923115Z return mod(**inputs) 2025-09-07T07:58:40.0923508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0923920Z outputs = self.deberta( 2025-09-07T07:58:40.0924307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0924717Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0925124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0925547Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0925917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0926287Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0926700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0927203Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0927663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0928111Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0928548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.0928948Z context_layer = torch.bmm( 2025-09-07T07:58:40.0929067Z 2025-09-07T07:58:40.0929181Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0929530Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0929853Z return mod(**inputs) 2025-09-07T07:58:40.0930233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0930639Z outputs = self.deberta( 2025-09-07T07:58:40.0931029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0931475Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0931883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0932323Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0932689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0933781Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0934201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0934632Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0935062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0935511Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0935951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.0936480Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.0936732Z 2025-09-07T07:58:40.0936817Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0937039Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0937257Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0937462Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0937672Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0937882Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0938119Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0938477Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0938810Z return mod(**inputs) 2025-09-07T07:58:40.0939204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0939613Z outputs = self.deberta( 2025-09-07T07:58:40.0940004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0940402Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0940810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0941230Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0941600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0941957Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0942368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0942799Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0943231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0943644Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0944057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.0944586Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.0945312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0945817Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0946077Z 2025-09-07T07:58:40.0946191Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0946541Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0946866Z return mod(**inputs) 2025-09-07T07:58:40.0947256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0947648Z outputs = self.deberta( 2025-09-07T07:58:40.0948059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0948452Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0948846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0949257Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0949627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0949986Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0950397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0950818Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0951254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0951656Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0952048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0952577Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0952840Z 2025-09-07T07:58:40.0952942Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0953296Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0953615Z return mod(**inputs) 2025-09-07T07:58:40.0953982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0954375Z outputs = self.deberta( 2025-09-07T07:58:40.0954760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0955154Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0955544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0955944Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0956307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0956666Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0957067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0957479Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0957888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0958288Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0958685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0959208Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0959459Z 2025-09-07T07:58:40.0959545Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0959778Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0960168Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0960489Z return mod(**inputs) 2025-09-07T07:58:40.0960865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0961257Z outputs = self.deberta( 2025-09-07T07:58:40.0961663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0962066Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0962462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0962873Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0963232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0963596Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0964006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0964432Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0964856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0965264Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0965668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.0966186Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.0966739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0967315Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0967503Z 2025-09-07T07:58:40.0967613Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0967980Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0968321Z return mod(**inputs) 2025-09-07T07:58:40.0968707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0969104Z outputs = self.deberta( 2025-09-07T07:58:40.0969504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0969921Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0970334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0970769Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0971141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0971518Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0971922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0972343Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0972765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0973161Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0973565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.0973966Z context_layer = torch.bmm( 2025-09-07T07:58:40.0974119Z 2025-09-07T07:58:40.0974231Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0974577Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0974899Z return mod(**inputs) 2025-09-07T07:58:40.0975282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0975678Z outputs = self.deberta( 2025-09-07T07:58:40.0976082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0976473Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0976868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0977277Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0977642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0977996Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0978391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0978802Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0979221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0979609Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0980020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.0980596Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.0980860Z 2025-09-07T07:58:40.0980952Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0981187Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0981417Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0981632Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0981835Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0982035Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.0982262Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0982607Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0982920Z return mod(**inputs) 2025-09-07T07:58:40.0983289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0983677Z outputs = self.deberta( 2025-09-07T07:58:40.0984051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0984439Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0984827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0985233Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0985594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0985948Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0986351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0986772Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0987182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0987569Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0987985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.0988483Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.0989014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.0989490Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.0989714Z 2025-09-07T07:58:40.0989828Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0990187Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0990525Z return mod(**inputs) 2025-09-07T07:58:40.0990919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0991337Z outputs = self.deberta( 2025-09-07T07:58:40.0991731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0992145Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0992542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0992956Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.0993329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.0993693Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.0994093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.0994502Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.0994913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.0995314Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.0995706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.0996245Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.0996519Z 2025-09-07T07:58:40.0996630Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.0997003Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.0997319Z return mod(**inputs) 2025-09-07T07:58:40.0997688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.0998083Z outputs = self.deberta( 2025-09-07T07:58:40.0998471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.0998871Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.0999267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.0999676Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1000046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1000414Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1000819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1001238Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1001650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1002094Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1002495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1003101Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1003359Z 2025-09-07T07:58:40.1003452Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1003715Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1004079Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1004406Z return mod(**inputs) 2025-09-07T07:58:40.1004795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1005193Z outputs = self.deberta( 2025-09-07T07:58:40.1005580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1005979Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1006377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1006860Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1007240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1007614Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1008063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1008501Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1008920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1009323Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1009734Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.1010255Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.1010815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1011311Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1011497Z 2025-09-07T07:58:40.1011602Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1011960Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1012284Z return mod(**inputs) 2025-09-07T07:58:40.1012670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1013067Z outputs = self.deberta( 2025-09-07T07:58:40.1013446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1013842Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1014240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1014652Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1015013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1015375Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1015757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1016186Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1016582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1016957Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1017350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.1017769Z context_layer = torch.bmm( 2025-09-07T07:58:40.1017886Z 2025-09-07T07:58:40.1018000Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1018358Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1018427Z return mod(**inputs) 2025-09-07T07:58:40.1018695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1018777Z outputs = self.deberta( 2025-09-07T07:58:40.1019043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1019125Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1019380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1019469Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1019694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1019775Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1020044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1020136Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1020406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1020487Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1020747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.1020937Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.1020941Z 2025-09-07T07:58:40.1021027Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1021126Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1021201Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1021275Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1021358Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1021433Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1021543Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1021734Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1021800Z return mod(**inputs) 2025-09-07T07:58:40.1022064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1022133Z outputs = self.deberta( 2025-09-07T07:58:40.1022403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1022479Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1022738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1022833Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1023046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1023164Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1023433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1023528Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1023778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1023851Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1024133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.1024314Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.1024607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1024733Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1024736Z 2025-09-07T07:58:40.1024844Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1025032Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1025096Z return mod(**inputs) 2025-09-07T07:58:40.1025362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1025429Z outputs = self.deberta( 2025-09-07T07:58:40.1025688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1025758Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1026010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1026104Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1026309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1026392Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1026640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1026728Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1026987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1027062Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1027321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1027524Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1027530Z 2025-09-07T07:58:40.1027636Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1027827Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1027892Z return mod(**inputs) 2025-09-07T07:58:40.1028162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1028233Z outputs = self.deberta( 2025-09-07T07:58:40.1028496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1028576Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1028825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1028916Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1029151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1029235Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1029483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1029577Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1029860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1029934Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1030191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1030391Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1030397Z 2025-09-07T07:58:40.1030480Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1030580Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-07T07:58:40.1030776Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:58:40.1030840Z     return mod(**inputs)
2025-09-07T07:58:40.1031103Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-09-07T07:58:40.1031181Z     outputs = self.deberta(
2025-09-07T07:58:40.1031437Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-09-07T07:58:40.1031514Z     encoder_outputs = self.encoder(
2025-09-07T07:58:40.1031769Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-09-07T07:58:40.1031854Z     output_states, attn_weights = layer_module(
2025-09-07T07:58:40.1032089Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:58:40.1032165Z     return super().__call__(*args, **kwargs)
2025-09-07T07:58:40.1032422Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-09-07T07:58:40.1032510Z     attention_output, att_matrix = self.attention(
2025-09-07T07:58:40.1032761Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-09-07T07:58:40.1032843Z     self_output, att_matrix = self.self(
2025-09-07T07:58:40.1033095Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward
2025-09-07T07:58:40.1033286Z     value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads)
2025-09-07T07:58:40.1033574Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores
2025-09-07T07:58:40.1033704Z     return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))
2025-09-07T07:58:40.1033707Z 
2025-09-07T07:58:40.1033807Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T07:58:40.1033992Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:58:40.1034066Z     return mod(**inputs)
2025-09-07T07:58:40.1034322Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-09-07T07:58:40.1034398Z     outputs = self.deberta(
2025-09-07T07:58:40.1034654Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-09-07T07:58:40.1034729Z     encoder_outputs = self.encoder(
2025-09-07T07:58:40.1035008Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-09-07T07:58:40.1035092Z     output_states, attn_weights = layer_module(
2025-09-07T07:58:40.1035305Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:58:40.1035380Z     return super().__call__(*args, **kwargs)
2025-09-07T07:58:40.1035666Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-09-07T07:58:40.1035757Z     attention_output, att_matrix = self.attention(
2025-09-07T07:58:40.1036051Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-09-07T07:58:40.1036134Z     self_output, att_matrix = self.self(
2025-09-07T07:58:40.1036393Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward
2025-09-07T07:58:40.1036472Z     context_layer = torch.bmm(
2025-09-07T07:58:40.1036476Z 
2025-09-07T07:58:40.1036576Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T07:58:40.1036774Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:58:40.1036838Z     return mod(**inputs)
2025-09-07T07:58:40.1037102Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-09-07T07:58:40.1037175Z     outputs = self.deberta(
2025-09-07T07:58:40.1037444Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-09-07T07:58:40.1037520Z     encoder_outputs = self.encoder(
2025-09-07T07:58:40.1037770Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-09-07T07:58:40.1037855Z     output_states, attn_weights = layer_module(
2025-09-07T07:58:40.1038070Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:58:40.1038145Z     return super().__call__(*args, **kwargs)
2025-09-07T07:58:40.1038403Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-09-07T07:58:40.1038495Z     attention_output, att_matrix = self.attention(
2025-09-07T07:58:40.1038758Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-09-07T07:58:40.1038840Z     self_output, att_matrix = self.self(
2025-09-07T07:58:40.1039100Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward
2025-09-07T07:58:40.1039288Z     context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1))
2025-09-07T07:58:40.1039295Z 
2025-09-07T07:58:40.1039373Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.1039456Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.1039532Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.1039606Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.1039689Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.1039763Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.1039874Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T07:58:40.1040065Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:58:40.1040130Z     return mod(**inputs)
2025-09-07T07:58:40.1040400Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-09-07T07:58:40.1040468Z     outputs = self.deberta(
2025-09-07T07:58:40.1040758Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-09-07T07:58:40.1040830Z     encoder_outputs = self.encoder(
2025-09-07T07:58:40.1041091Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-09-07T07:58:40.1041184Z     output_states, attn_weights = layer_module(
2025-09-07T07:58:40.1041428Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:58:40.1041516Z     return super().__call__(*args, **kwargs)
2025-09-07T07:58:40.1041780Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-09-07T07:58:40.1041871Z     attention_output, att_matrix = self.attention(
2025-09-07T07:58:40.1042149Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-09-07T07:58:40.1042228Z     self_output, att_matrix = self.self(
2025-09-07T07:58:40.1042498Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward
2025-09-07T07:58:40.1042688Z     query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads)
2025-09-07T07:58:40.1042997Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores
2025-09-07T07:58:40.1043128Z     return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))
2025-09-07T07:58:40.1043132Z 
2025-09-07T07:58:40.1043233Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T07:58:40.1043434Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1043499Z return mod(**inputs) 2025-09-07T07:58:40.1043779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1043848Z outputs = self.deberta( 2025-09-07T07:58:40.1044121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1044194Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1044461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1044555Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1044775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1044860Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1045304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1045412Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1045696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1045774Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1046057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1046273Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1046277Z 2025-09-07T07:58:40.1046392Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1046597Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1046665Z return mod(**inputs) 2025-09-07T07:58:40.1047125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1047277Z outputs = self.deberta( 2025-09-07T07:58:40.1047555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1047634Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1047921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1048060Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1048303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1048390Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1048646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1048747Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1049002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1049080Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1049342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1049543Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1049546Z 2025-09-07T07:58:40.1049633Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1049736Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1049927Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1050000Z return mod(**inputs) 2025-09-07T07:58:40.1050260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1050337Z outputs = self.deberta( 2025-09-07T07:58:40.1050591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1050671Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1050927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1051015Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1051233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1051310Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1051569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1051660Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1051923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1052006Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1052260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.1052454Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.1052746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1052878Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1052882Z 2025-09-07T07:58:40.1052982Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1053211Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1053283Z return mod(**inputs) 2025-09-07T07:58:40.1053543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1053618Z outputs = self.deberta( 2025-09-07T07:58:40.1053876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1053975Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1054239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1054322Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1054538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1054619Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1054889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1054978Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1055239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1055321Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1055585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.1055663Z context_layer = torch.bmm( 2025-09-07T07:58:40.1055667Z 2025-09-07T07:58:40.1055765Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1055953Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1056026Z return mod(**inputs) 2025-09-07T07:58:40.1056296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1056368Z outputs = self.deberta( 2025-09-07T07:58:40.1056625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1056701Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1056966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1057050Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1057272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1057358Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1057618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1057707Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1057962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1058041Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1058297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.1058483Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.1058486Z 2025-09-07T07:58:40.1058561Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1058640Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1058711Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1058782Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1058859Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1058960Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1059068Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1059257Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1059320Z return mod(**inputs) 2025-09-07T07:58:40.1059587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1059754Z outputs = self.deberta( 2025-09-07T07:58:40.1060018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1060089Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1060344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1060434Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1060648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1060733Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1061000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1061091Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1061363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1061442Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1061719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.1061912Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.1062228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1062353Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1062356Z 2025-09-07T07:58:40.1062456Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1062654Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1062719Z return mod(**inputs) 2025-09-07T07:58:40.1062991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1063059Z outputs = self.deberta( 2025-09-07T07:58:40.1063315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1063394Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1063653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1063745Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1063956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1064041Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1064299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1064390Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1064655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1064730Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1064994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1065229Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1065232Z 2025-09-07T07:58:40.1065340Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1065531Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1065595Z return mod(**inputs) 2025-09-07T07:58:40.1065890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1065960Z outputs = self.deberta( 2025-09-07T07:58:40.1066226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1066299Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1066559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1066656Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1066878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1066963Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1067220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1067311Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1067574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1067646Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1067907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1068106Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1068110Z 2025-09-07T07:58:40.1068194Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1068295Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1068484Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1068556Z return mod(**inputs) 2025-09-07T07:58:40.1068816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1068892Z outputs = self.deberta( 2025-09-07T07:58:40.1069151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1069226Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1069502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1069592Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1069832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1069913Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1070186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1070278Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1070546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1070631Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1070899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.1071149Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.1071453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1071588Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1071591Z 2025-09-07T07:58:40.1071694Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1071918Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1071997Z return mod(**inputs) 2025-09-07T07:58:40.1072268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1072346Z outputs = self.deberta( 2025-09-07T07:58:40.1072616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1072690Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1072966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1073052Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1073276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1073357Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1073631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1073722Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1073989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1074074Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1074341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.1074420Z context_layer = torch.bmm( 2025-09-07T07:58:40.1074423Z 2025-09-07T07:58:40.1074524Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1074718Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1074794Z return mod(**inputs) 2025-09-07T07:58:40.1075077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1075152Z outputs = self.deberta( 2025-09-07T07:58:40.1075419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1075494Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1075768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1075853Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1076079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1076158Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1076432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1076523Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1076791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1076877Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1077143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.1077373Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.1077377Z 2025-09-07T07:58:40.1077458Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1077545Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1077621Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1077697Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1077824Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1077903Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1078010Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1078217Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1078285Z return mod(**inputs) 2025-09-07T07:58:40.1078571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1078644Z outputs = self.deberta( 2025-09-07T07:58:40.1078917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1078990Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1079252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1079351Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1079567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1079653Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1079915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1080009Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1080278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1080356Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1080625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.1080814Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.1081120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1081251Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1081254Z 2025-09-07T07:58:40.1081357Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1081559Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1081629Z return mod(**inputs) 2025-09-07T07:58:40.1081903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1081972Z outputs = self.deberta( 2025-09-07T07:58:40.1082243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1082329Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1082600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1082697Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1082921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1083010Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1083311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1083406Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1083681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1083759Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1084064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1084277Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1084280Z 2025-09-07T07:58:40.1084394Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1084595Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1084667Z return mod(**inputs) 2025-09-07T07:58:40.1084950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1085021Z outputs = self.deberta( 2025-09-07T07:58:40.1085298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1085372Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1085669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1085771Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1086004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1086099Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1086390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1086483Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1086826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1086917Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1087203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1087418Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1087423Z 2025-09-07T07:58:40.1087518Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1087632Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1087846Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1087930Z return mod(**inputs) 2025-09-07T07:58:40.1088224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1088305Z outputs = self.deberta( 2025-09-07T07:58:40.1088591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1088671Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1088974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1089063Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1089294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1089376Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1089689Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1089785Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1090054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1090141Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1090441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.1090642Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.1090950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1091081Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1091095Z 2025-09-07T07:58:40.1091201Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1091402Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1091479Z return mod(**inputs) 2025-09-07T07:58:40.1091756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1091835Z outputs = self.deberta( 2025-09-07T07:58:40.1092111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1092187Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1092466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1092555Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1092789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1092872Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1093142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1093245Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1093517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1093607Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1093880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.1093961Z context_layer = torch.bmm( 2025-09-07T07:58:40.1093965Z 2025-09-07T07:58:40.1094071Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1094274Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1094350Z return mod(**inputs) 2025-09-07T07:58:40.1094625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1094702Z outputs = self.deberta( 2025-09-07T07:58:40.1094971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1095049Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1095331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1095420Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1095654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1095773Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1096053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1096150Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1096424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1096512Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1096822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.1097019Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.1097023Z 2025-09-07T07:58:40.1097108Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1097188Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1097277Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1097355Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1097441Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1097518Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1097625Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1097834Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1097903Z return mod(**inputs) 2025-09-07T07:58:40.1098187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1098257Z outputs = self.deberta( 2025-09-07T07:58:40.1098529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1098610Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1098879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1098985Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1099197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1099282Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1099536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1099627Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1099891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1099966Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1100227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.1100413Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.1100716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1100844Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1100847Z 2025-09-07T07:58:40.1100950Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1101152Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1101216Z return mod(**inputs) 2025-09-07T07:58:40.1101491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1101560Z outputs = self.deberta( 2025-09-07T07:58:40.1101826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1101942Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1102206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1102301Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1102519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1102625Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1102895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1102986Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1103261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1103341Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1103615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1103816Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1103820Z 2025-09-07T07:58:40.1103922Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1104119Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1104186Z return mod(**inputs) 2025-09-07T07:58:40.1104452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1104521Z outputs = self.deberta( 2025-09-07T07:58:40.1104784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1104858Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1105115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1105206Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1105418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1105502Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1105757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1105845Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1106109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1106184Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1106447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1106646Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1106649Z 2025-09-07T07:58:40.1106735Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1106834Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1107029Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1107100Z return mod(**inputs) 2025-09-07T07:58:40.1107360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1107433Z outputs = self.deberta( 2025-09-07T07:58:40.1107688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1107792Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1108058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1108142Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1108358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1108437Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1108721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1108819Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1109077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1109161Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1109418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.1109611Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.1109906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1110038Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1110050Z 2025-09-07T07:58:40.1110150Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1110343Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1110416Z return mod(**inputs) 2025-09-07T07:58:40.1110680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1110756Z outputs = self.deberta( 2025-09-07T07:58:40.1111013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1111084Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1111349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1111432Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1111654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1111732Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1111986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1112082Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1112341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1112424Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1112681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.1112754Z context_layer = torch.bmm( 2025-09-07T07:58:40.1112758Z 2025-09-07T07:58:40.1112858Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1113044Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1113114Z return mod(**inputs) 2025-09-07T07:58:40.1113374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1113444Z outputs = self.deberta( 2025-09-07T07:58:40.1113700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1113800Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1114064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1114148Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1114366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1114469Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1114733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1114822Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1115082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1115167Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1115429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.1115616Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.1115620Z 2025-09-07T07:58:40.1115698Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1115775Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1115863Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1115938Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1116018Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1116090Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1116191Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1116388Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1116455Z return mod(**inputs) 2025-09-07T07:58:40.1116732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1116800Z outputs = self.deberta( 2025-09-07T07:58:40.1117062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1117142Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1117405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1117497Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1117711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1117795Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1118059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1118150Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1118422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1118495Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1118767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.1118950Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.1119251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1119386Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1119421Z 2025-09-07T07:58:40.1119523Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-07T07:58:40.1119722Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:58:40.1119787Z     return mod(**inputs)
2025-09-07T07:58:40.1120058Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-09-07T07:58:40.1120124Z     outputs = self.deberta(
2025-09-07T07:58:40.1120416Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-09-07T07:58:40.1120496Z     encoder_outputs = self.encoder(
2025-09-07T07:58:40.1120754Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-09-07T07:58:40.1120844Z     output_states, attn_weights = layer_module(
2025-09-07T07:58:40.1121059Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:58:40.1121140Z     return super().__call__(*args, **kwargs)
2025-09-07T07:58:40.1121415Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-09-07T07:58:40.1121506Z     attention_output, att_matrix = self.attention(
2025-09-07T07:58:40.1121784Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-09-07T07:58:40.1121861Z     self_output, att_matrix = self.self(
2025-09-07T07:58:40.1122134Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward
2025-09-07T07:58:40.1122341Z     attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
2025-09-07T07:58:40.1122344Z
2025-09-07T07:58:40.1122452Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T07:58:40.1122656Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:58:40.1122724Z     return mod(**inputs)
2025-09-07T07:58:40.1122999Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-09-07T07:58:40.1123069Z     outputs = self.deberta(
2025-09-07T07:58:40.1123339Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-09-07T07:58:40.1123419Z     encoder_outputs = self.encoder(
2025-09-07T07:58:40.1123685Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-09-07T07:58:40.1123781Z     output_states, attn_weights = layer_module(
2025-09-07T07:58:40.1124000Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:58:40.1124088Z     return super().__call__(*args, **kwargs)
2025-09-07T07:58:40.1124353Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-09-07T07:58:40.1124445Z     attention_output, att_matrix = self.attention(
2025-09-07T07:58:40.1124719Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-09-07T07:58:40.1124801Z     self_output, att_matrix = self.self(
2025-09-07T07:58:40.1125076Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward
2025-09-07T07:58:40.1125284Z     attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
2025-09-07T07:58:40.1125288Z
2025-09-07T07:58:40.1125377Z cudagraph partition due to non gpu ops
2025-09-07T07:58:40.1125516Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T07:58:40.1125718Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:58:40.1125795Z     return mod(**inputs)
2025-09-07T07:58:40.1126083Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-09-07T07:58:40.1126162Z     outputs = self.deberta(
2025-09-07T07:58:40.1126473Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-09-07T07:58:40.1126551Z     encoder_outputs = self.encoder(
2025-09-07T07:58:40.1126904Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-09-07T07:58:40.1126998Z     output_states, attn_weights = layer_module(
2025-09-07T07:58:40.1127237Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:58:40.1127329Z     return super().__call__(*args, **kwargs)
2025-09-07T07:58:40.1127624Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-09-07T07:58:40.1127736Z     attention_output, att_matrix = self.attention(
2025-09-07T07:58:40.1128017Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-09-07T07:58:40.1128106Z     self_output, att_matrix = self.self(
2025-09-07T07:58:40.1128377Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward
2025-09-07T07:58:40.1128576Z     value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads)
2025-09-07T07:58:40.1128880Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores
2025-09-07T07:58:40.1129014Z     return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))
2025-09-07T07:58:40.1129018Z
2025-09-07T07:58:40.1129129Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T07:58:40.1129323Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T07:58:40.1129403Z     return mod(**inputs)
2025-09-07T07:58:40.1129678Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward
2025-09-07T07:58:40.1129757Z     outputs = self.deberta(
2025-09-07T07:58:40.1130020Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward
2025-09-07T07:58:40.1130094Z     encoder_outputs = self.encoder(
2025-09-07T07:58:40.1130368Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward
2025-09-07T07:58:40.1130457Z     output_states, attn_weights = layer_module(
2025-09-07T07:58:40.1130684Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T07:58:40.1130764Z     return super().__call__(*args, **kwargs)
2025-09-07T07:58:40.1131026Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward
2025-09-07T07:58:40.1131129Z     attention_output, att_matrix = self.attention(
2025-09-07T07:58:40.1131393Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward
2025-09-07T07:58:40.1131478Z     self_output, att_matrix = self.self(
2025-09-07T07:58:40.1131749Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward
2025-09-07T07:58:40.1131858Z     context_layer = torch.bmm(
2025-09-07T07:58:40.1131870Z
2025-09-07T07:58:40.1131971Z cudagraph partition due to non gpu ops.
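All of the stacks in this block converge on the same few statements in the DeBERTa-v2 self-attention: the permute/contiguous/view reshape inside transpose_for_scores (modeling_deberta_v2.py line 194, reached from the query projection at line 236 and the value projection at line 238), the scaled torch.bmm that forms the attention scores (line 248), the torch.bmm that forms context_layer (line 268), and the final context_layer.view(...) (line 272). The sketch below assembles those quoted statements into a minimal, self-contained Python example for reference; everything not quoted in the tracebacks (the 4-D reshape before the permute, the key projection, the softmax, and the tensor sizes used at the bottom) is an assumption for illustration, not the transformers implementation.

import torch

def transpose_for_scores(x, num_attention_heads):
    # modeling_deberta_v2.py:194 as quoted above; the 4-D reshape before the
    # permute is assumed so the quoted line has something to operate on.
    x = x.view(x.size()[:-1] + (num_attention_heads, -1))
    return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1))

def attention_core(query_states, hidden_states, query_proj, key_proj, value_proj,
                   num_attention_heads, scale):
    # lines 236/238: project, then fold the heads into the batch dimension
    query_layer = transpose_for_scores(query_proj(query_states), num_attention_heads)
    key_layer = transpose_for_scores(key_proj(hidden_states), num_attention_heads)  # key path assumed
    value_layer = transpose_for_scores(value_proj(hidden_states), num_attention_heads)
    # line 248: scaled attention scores
    attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype))
    attention_probs = attention_scores.softmax(dim=-1)  # softmax between lines 248 and 268 is assumed
    # line 268: attention-weighted values
    context_layer = torch.bmm(attention_probs, value_layer)
    # line 272: restore the (batch, heads, seq, head_dim) layout
    return context_layer.view(-1, num_attention_heads, context_layer.size(-2), context_layer.size(-1))

# Hypothetical usage; the sizes are illustrative only.
q_proj, k_proj, v_proj = (torch.nn.Linear(768, 768) for _ in range(3))
x = torch.randn(2, 128, 768)
out = attention_core(x, x, q_proj, k_proj, v_proj, num_attention_heads=12,
                     scale=torch.sqrt(torch.tensor(64.0)))
print(out.shape)  # torch.Size([2, 12, 128, 64])

This shard runs on a CPU runner, so these attention ops execute outside any CUDA graph; the repeated "cudagraph partition due to non gpu ops" lines appear to be Inductor noting exactly that at each of the quoted call sites.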
Found from : 2025-09-07T07:58:40.1215099Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1215163Z return mod(**inputs) 2025-09-07T07:58:40.1215422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1215498Z outputs = self.deberta( 2025-09-07T07:58:40.1215755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1215836Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1216096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1216180Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1216399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1216509Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1216775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1216866Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1217128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1217204Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1217501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1217709Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1217713Z 2025-09-07T07:58:40.1217814Z cudagraph partition due to non gpu ops. 
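The quoted line 248 computes attention scores as a batched matmul against a scaled, transposed key. A minimal sketch with invented shapes; taking scale to be sqrt(head_dim) is an assumption for the example, not something stated in the log:

    import math
    import torch

    # Shapes invented for illustration; the bmm/transpose/scale pattern follows
    # the traceback's quoted source line. scale = sqrt(head_dim) is an assumption.
    bsz_heads, seq, head_dim = 24, 128, 64
    query_layer = torch.randn(bsz_heads, seq, head_dim)
    key_layer = torch.randn(bsz_heads, seq, head_dim)
    scale = torch.tensor(math.sqrt(head_dim))
    attention_scores = torch.bmm(
        query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)
    )
    print(attention_scores.shape)  # torch.Size([24, 128, 128])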
Found from : 2025-09-07T07:58:40.1218008Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1218075Z return mod(**inputs) 2025-09-07T07:58:40.1218337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1218412Z outputs = self.deberta( 2025-09-07T07:58:40.1218667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1218747Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1219010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1219101Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1219312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1219391Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1219658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1219748Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1220010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1220084Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1220345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1220549Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1220552Z 2025-09-07T07:58:40.1220629Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1220738Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1220930Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1221005Z return mod(**inputs) 2025-09-07T07:58:40.1221266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1221335Z outputs = self.deberta( 2025-09-07T07:58:40.1221601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1221673Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1221939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1222023Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1222241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1222325Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1222610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1222707Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1222962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1223043Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1223330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.1223516Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.1223818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1223945Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1223951Z 2025-09-07T07:58:40.1224058Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1224250Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1224322Z return mod(**inputs) 2025-09-07T07:58:40.1224586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1224653Z outputs = self.deberta( 2025-09-07T07:58:40.1224923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1224993Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1225256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1225340Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1225558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1225644Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1225910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1226009Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1226280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1226354Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1226622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.1226693Z context_layer = torch.bmm( 2025-09-07T07:58:40.1226696Z 2025-09-07T07:58:40.1226804Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1227006Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1227080Z return mod(**inputs) 2025-09-07T07:58:40.1227362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1227429Z outputs = self.deberta( 2025-09-07T07:58:40.1227698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1227769Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1228034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1228117Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1228330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1228447Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1228701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1228798Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1229056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1229139Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1229422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.1229600Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.1229603Z 2025-09-07T07:58:40.1229689Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1229768Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1229852Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1229928Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1230001Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1230083Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1230182Z cudagraph partition due to non gpu ops. 
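Line 272 in the stack above reshapes the flattened (bs*heads, seq, head_dim) context back into separate batch and head dimensions. A minimal sketch with invented sizes; num_attention_heads = 12 is assumed only for the example:

    import torch

    # Invented sizes for illustration; the view call mirrors the quoted line 272.
    num_attention_heads = 12
    context_layer = torch.randn(24, 128, 64)  # (bs * heads, seq, head_dim), bs = 2
    out = context_layer.view(
        -1, num_attention_heads, context_layer.size(-2), context_layer.size(-1)
    )
    print(out.shape)  # torch.Size([2, 12, 128, 64])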
Found from : 2025-09-07T07:58:40.1230383Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1230447Z return mod(**inputs) 2025-09-07T07:58:40.1230711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1230789Z outputs = self.deberta( 2025-09-07T07:58:40.1231048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1231126Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1231383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1231467Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1231686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1231763Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1232025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1232115Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1232384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1232460Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1232723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 236, in forward 2025-09-07T07:58:40.1232922Z query_layer = self.transpose_for_scores(self.query_proj(query_states), self.num_attention_heads) 2025-09-07T07:58:40.1233231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1233375Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1233378Z 2025-09-07T07:58:40.1233487Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1233700Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1233765Z return mod(**inputs) 2025-09-07T07:58:40.1234036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1234112Z outputs = self.deberta( 2025-09-07T07:58:40.1234376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1234494Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1234771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1234855Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1235076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1235181Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1235443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1235533Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1235793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1235870Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1236124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1236330Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1236334Z 2025-09-07T07:58:40.1236434Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1236631Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1236698Z return mod(**inputs) 2025-09-07T07:58:40.1236956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1237031Z outputs = self.deberta( 2025-09-07T07:58:40.1237284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1237365Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1237621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1237712Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1237927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1238008Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1238281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1238373Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1238641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1238720Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1238981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 248, in forward 2025-09-07T07:58:40.1239190Z attention_scores = torch.bmm(query_layer, key_layer.transpose(-1, -2) / scale.to(dtype=query_layer.dtype)) 2025-09-07T07:58:40.1239193Z 2025-09-07T07:58:40.1239272Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1239381Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1239575Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1239648Z return mod(**inputs) 2025-09-07T07:58:40.1239912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1239979Z outputs = self.deberta( 2025-09-07T07:58:40.1240248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1240353Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1240621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1240707Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1240926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1241038Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1241300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1241399Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1241660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1241745Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1242009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 238, in forward 2025-09-07T07:58:40.1242202Z value_layer = self.transpose_for_scores(self.value_proj(hidden_states), self.num_attention_heads) 2025-09-07T07:58:40.1242518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 194, in transpose_for_scores 2025-09-07T07:58:40.1242651Z return x.permute(0, 2, 1, 3).contiguous().view(-1, x.size(1), x.size(-1)) 2025-09-07T07:58:40.1242654Z 2025-09-07T07:58:40.1242767Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1242964Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1243033Z return mod(**inputs) 2025-09-07T07:58:40.1243316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1243390Z outputs = self.deberta( 2025-09-07T07:58:40.1243666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1243741Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1244018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1244109Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1244331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1244421Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1244693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1244797Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1245198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1245284Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1245572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 268, in forward 2025-09-07T07:58:40.1245647Z context_layer = torch.bmm( 2025-09-07T07:58:40.1245652Z 2025-09-07T07:58:40.1245769Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1245972Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1246048Z return mod(**inputs) 2025-09-07T07:58:40.1246329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1231, in forward 2025-09-07T07:58:40.1246399Z outputs = self.deberta( 2025-09-07T07:58:40.1246753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 786, in forward 2025-09-07T07:58:40.1246877Z encoder_outputs = self.encoder( 2025-09-07T07:58:40.1247179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 659, in forward 2025-09-07T07:58:40.1247275Z output_states, attn_weights = layer_module( 2025-09-07T07:58:40.1247561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:58:40.1247657Z return super().__call__(*args, **kwargs) 2025-09-07T07:58:40.1247941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 438, in forward 2025-09-07T07:58:40.1248048Z attention_output, att_matrix = self.attention( 2025-09-07T07:58:40.1248345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 371, in forward 2025-09-07T07:58:40.1248434Z self_output, att_matrix = self.self( 2025-09-07T07:58:40.1248706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 272, in forward 2025-09-07T07:58:40.1248897Z context_layer.view(-1, self.num_attention_heads, context_layer.size(-2), context_layer.size(-1)) 2025-09-07T07:58:40.1248901Z 2025-09-07T07:58:40.1248994Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1249078Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1249169Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1249247Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1249324Z cudagraph partition due to non gpu ops 2025-09-07T07:58:40.1249436Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:58:40.1249637Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1249713Z return mod(**inputs) 2025-09-07T07:58:40.1249992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1262, in forward 2025-09-07T07:58:40.1250102Z start_loss = loss_fct(start_logits, start_positions) 2025-09-07T07:58:40.1250106Z 2025-09-07T07:58:40.1250218Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:58:40.1250415Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:58:40.1250494Z return mod(**inputs) 2025-09-07T07:58:40.1250770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 1263, in forward 2025-09-07T07:58:40.1250871Z end_loss = loss_fct(end_logits, end_positions) 2025-09-07T07:58:40.1250874Z 2025-09-07T07:59:00.3823375Z Compilation time (from dynamo_timed): 75.009386379 2025-09-07T07:59:00.3828673Z pass 2025-09-07T07:59:00.3829649Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:59:00.3830891Z TIMING: _recursive_pre_grad_passes:0.08615 _recursive_joint_graph_passes:0.85582 _recursive_post_grad_passes:0.29942 linear_unary_template_precompiling:5.865 linear_unary_template_autotuning:1.23999 bmm_template_precompiling:2.98083 bmm_template_autotuning:0.82157 async_compile.wait:0.83216 code_gen:19.69017 inductor_compile:58.16911 backend_compile:69.93629 gc:0.0003 entire_frame_compile:75.00939 total_wall_time:75.00939 2025-09-07T07:59:00.3832281Z STATS: call_* op count: 1089 | FakeTensorMode.__torch_dispatch__:63398 | FakeTensor.__torch_dispatch__:7675 | ProxyTorchDispatchMode.__torch_dispatch__:16949 2025-09-07T07:59:00.3833017Z Dynamo produced 1 graphs covering 1089 ops with 0 graph breaks (0 unique) 2025-09-07T07:59:03.9566377Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:59:03.9568010Z import pynvml # type: ignore[import] 2025-09-07T07:59:06.5712872Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-09-07T07:59:06.5716602Z from pkg_resources import resource_filename 2025-09-07T07:59:07.2002684Z 2025-09-07T07:59:07.9320280Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:59:07.9320847Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:59:07.9322831Z cpu eval DistilBertForMaskedLM 2025-09-07T07:59:08.0989569Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:59:08.1676616Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:59:08.2282168Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:59:22.8203077Z Autotune Choices Stats: 2025-09-07T07:59:22.8203910Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.016261000382655766} 2025-09-07T07:59:22.8205250Z AUTOTUNE linear_unary(128x768, 768x768, 768) 2025-09-07T07:59:22.8206235Z strides: [768, 1], [1, 0], [1] 2025-09-07T07:59:22.8206615Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:59:22.8207176Z cpp_CppMicroGemmAMX_0 0.0163 ms 100.0% 2025-09-07T07:59:22.8207433Z _linear_pointwise 0.0657 ms 24.8% 2025-09-07T07:59:22.8207854Z SingleProcess AUTOTUNE benchmarking takes 0.2561 seconds and 1.3475 seconds precompiling for 2 choices 2025-09-07T07:59:24.9124889Z Autotune Choices Stats: 2025-09-07T07:59:24.9125362Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_4", "best_time": 0.08577800008424674} 2025-09-07T07:59:24.9136770Z AUTOTUNE linear_unary(128x768, 3072x768, 3072) 2025-09-07T07:59:24.9137154Z strides: [768, 1], [1, 0], [1] 2025-09-07T07:59:24.9142581Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:59:24.9146292Z cpp_CppMicroGemmAMX_4 0.0858 ms 100.0% 2025-09-07T07:59:24.9146677Z _linear_pointwise 0.1033 ms 83.0% 2025-09-07T07:59:24.9147200Z SingleProcess AUTOTUNE benchmarking takes 0.2655 seconds and 1.4839 seconds precompiling for 2 choices 2025-09-07T07:59:26.6080063Z Autotune Choices Stats: 2025-09-07T07:59:26.6080687Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_5", "best_time": 0.03892649988301855} 2025-09-07T07:59:26.6088458Z AUTOTUNE linear_unary(128x3072, 768x3072, 768) 2025-09-07T07:59:26.6088822Z strides: [3072, 1], [1, 0], [1] 2025-09-07T07:59:26.6089164Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:59:26.6089526Z cpp_CppMicroGemmAMX_5 0.0389 ms 100.0% 2025-09-07T07:59:26.6089874Z _linear_pointwise 0.1055 ms 36.9% 2025-09-07T07:59:26.6090416Z SingleProcess AUTOTUNE benchmarking takes 0.2631 seconds and 1.3529 seconds precompiling for 2 choices 2025-09-07T07:59:30.7949422Z Autotune Choices Stats: 2025-09-07T07:59:30.7950041Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_37", "best_time": 0.5850325001119927} 2025-09-07T07:59:30.7954620Z AUTOTUNE linear_unary(128x768, 30522x768, 30522) 2025-09-07T07:59:30.7954957Z strides: [768, 1], [1, 0], [1] 2025-09-07T07:59:30.7955350Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:59:30.7955755Z cpp_CppMicroGemmAMX_37 0.5850 ms 100.0% 2025-09-07T07:59:30.7956077Z _linear_pointwise 0.6650 ms 88.0% 2025-09-07T07:59:30.7956619Z SingleProcess AUTOTUNE benchmarking takes 0.3681 seconds and 1.3442 seconds precompiling for 2 choices 2025-09-07T07:59:31.0948260Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0948689Z cudagraph partition due to non gpu ops 
2025-09-07T07:59:31.0949365Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0949584Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0949795Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0950006Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0950218Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0950429Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0950632Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0950844Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0951204Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:59:31.0951612Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:59:31.0951962Z return mod(**inputs) 2025-09-07T07:59:31.0952414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-09-07T07:59:31.0952861Z dlbrt_output = self.distilbert( 2025-09-07T07:59:31.0953281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-09-07T07:59:31.0953703Z return self.transformer( 2025-09-07T07:59:31.0954124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-09-07T07:59:31.0954547Z layer_outputs = layer_module( 2025-09-07T07:59:31.0954923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:59:31.0955307Z return super().__call__(*args, **kwargs) 2025-09-07T07:59:31.0955723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-09-07T07:59:31.0956147Z sa_output = self.attention( 2025-09-07T07:59:31.0956594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-09-07T07:59:31.0957081Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:59:31.0957275Z 2025-09-07T07:59:31.0957361Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0957578Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0957802Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0958009Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0958208Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0958415Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0958622Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0958860Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:59:31.0959230Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:59:31.0959588Z return mod(**inputs) 2025-09-07T07:59:31.0959987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-09-07T07:59:31.0960405Z dlbrt_output = self.distilbert( 2025-09-07T07:59:31.0960817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-09-07T07:59:31.0961221Z return self.transformer( 2025-09-07T07:59:31.0961628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-09-07T07:59:31.0962056Z layer_outputs = layer_module( 2025-09-07T07:59:31.0962415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:59:31.0962779Z return super().__call__(*args, **kwargs) 2025-09-07T07:59:31.0963192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-09-07T07:59:31.0963614Z sa_output = self.attention( 2025-09-07T07:59:31.0964073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-09-07T07:59:31.0964551Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:59:31.0964739Z 2025-09-07T07:59:31.0964825Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0965035Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0965244Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0965499Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0965712Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0965916Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0966124Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0966377Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:59:31.0967066Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:59:31.0967435Z return mod(**inputs) 2025-09-07T07:59:31.0967864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-09-07T07:59:31.0968319Z dlbrt_output = self.distilbert( 2025-09-07T07:59:31.0968725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-09-07T07:59:31.0969133Z return self.transformer( 2025-09-07T07:59:31.0969521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-09-07T07:59:31.0969926Z layer_outputs = layer_module( 2025-09-07T07:59:31.0970274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:59:31.0970641Z return super().__call__(*args, **kwargs) 2025-09-07T07:59:31.0971046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-09-07T07:59:31.0971462Z sa_output = self.attention( 2025-09-07T07:59:31.0971856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-09-07T07:59:31.0972315Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:59:31.0972496Z 2025-09-07T07:59:31.0972583Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0972784Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0972993Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0973197Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0973401Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0973604Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0973804Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0974030Z cudagraph partition due to non gpu ops. 
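The DistilBERT stacks above end in torch.nn.functional.scaled_dot_product_attention. A minimal, self-contained call with invented shapes, just to show the operator the partition message is attributed to (the keyword arguments are illustrative defaults, not values taken from the log):

    import torch
    import torch.nn.functional as F

    # Invented shapes; only the operator name comes from the traceback above.
    bs, heads, seq, head_dim = 2, 12, 128, 64
    q = torch.randn(bs, heads, seq, head_dim)
    k = torch.randn(bs, heads, seq, head_dim)
    v = torch.randn(bs, heads, seq, head_dim)
    attn_output = F.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)
    print(attn_output.shape)  # torch.Size([2, 12, 128, 64])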
Found from : 2025-09-07T07:59:31.0974380Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:59:31.0974695Z return mod(**inputs) 2025-09-07T07:59:31.0975084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-09-07T07:59:31.0975497Z dlbrt_output = self.distilbert( 2025-09-07T07:59:31.0975899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-09-07T07:59:31.0976310Z return self.transformer( 2025-09-07T07:59:31.0976688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-09-07T07:59:31.0977082Z layer_outputs = layer_module( 2025-09-07T07:59:31.0977419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:59:31.0977768Z return super().__call__(*args, **kwargs) 2025-09-07T07:59:31.0978223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-09-07T07:59:31.0978624Z sa_output = self.attention( 2025-09-07T07:59:31.0979006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-09-07T07:59:31.0979455Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:59:31.0979629Z 2025-09-07T07:59:31.0979749Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0979947Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0980151Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0980372Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0980569Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0980768Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0980963Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0981200Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:59:31.0981554Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:59:31.0981877Z return mod(**inputs) 2025-09-07T07:59:31.0982258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-09-07T07:59:31.0982654Z dlbrt_output = self.distilbert( 2025-09-07T07:59:31.0983050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-09-07T07:59:31.0983446Z return self.transformer( 2025-09-07T07:59:31.0983825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-09-07T07:59:31.0984211Z layer_outputs = layer_module( 2025-09-07T07:59:31.0984545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:59:31.0984900Z return super().__call__(*args, **kwargs) 2025-09-07T07:59:31.0985300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-09-07T07:59:31.0985691Z sa_output = self.attention( 2025-09-07T07:59:31.0986064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-09-07T07:59:31.0986509Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:59:31.0986689Z 2025-09-07T07:59:31.0986765Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0986971Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0987166Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0987369Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0987566Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0987767Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0987962Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0988190Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:59:31.0988539Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:59:31.0988854Z return mod(**inputs) 2025-09-07T07:59:31.0989228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 826, in forward 2025-09-07T07:59:31.0989618Z dlbrt_output = self.distilbert( 2025-09-07T07:59:31.0990013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-09-07T07:59:31.0990409Z return self.transformer( 2025-09-07T07:59:31.0990790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-09-07T07:59:31.0994933Z layer_outputs = layer_module( 2025-09-07T07:59:31.0995303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:59:31.0995656Z return super().__call__(*args, **kwargs) 2025-09-07T07:59:31.0996055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-09-07T07:59:31.0996441Z sa_output = self.attention( 2025-09-07T07:59:31.0996899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-09-07T07:59:31.0997358Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:59:31.0997529Z 2025-09-07T07:59:31.0997613Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0997817Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0998011Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0998206Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0998436Z cudagraph partition due to non gpu ops 2025-09-07T07:59:31.0998654Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:59:31.0999002Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:59:31.0999316Z return mod(**inputs) 2025-09-07T07:59:31.0999688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 843, in forward 2025-09-07T07:59:31.1000212Z mlm_loss = self.mlm_loss_fct(prediction_logits.view(-1, prediction_logits.size(-1)), labels.view(-1)) 2025-09-07T07:59:31.1000451Z 2025-09-07T07:59:35.8970551Z Compilation time (from dynamo_timed): 26.564998383 2025-09-07T07:59:35.8976204Z pass 2025-09-07T07:59:35.8982156Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:59:35.8987304Z TIMING: _recursive_pre_grad_passes:0.01881 _recursive_joint_graph_passes:0.26051 _recursive_post_grad_passes:0.04568 linear_unary_template_precompiling:5.53394 linear_unary_template_autotuning:1.14607 async_compile.wait:0.78242 code_gen:4.58006 inductor_compile:22.41312 backend_compile:25.1416 gc:0.00031 entire_frame_compile:26.565 total_wall_time:26.565 2025-09-07T07:59:35.8989016Z STATS: call_* op count: 155 | FakeTensorMode.__torch_dispatch__:14295 | FakeTensor.__torch_dispatch__:1621 | ProxyTorchDispatchMode.__torch_dispatch__:3779 2025-09-07T07:59:35.8989834Z Dynamo produced 1 graphs covering 155 ops with 0 graph breaks (0 unique) 2025-09-07T07:59:38.3888472Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. 
Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T07:59:38.3889443Z import pynvml # type: ignore[import] 2025-09-07T07:59:41.0225330Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-09-07T07:59:41.0226338Z from pkg_resources import resource_filename 2025-09-07T07:59:41.6774296Z 2025-09-07T07:59:42.2334192Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:59:42.2334743Z loading model: 0it [00:00, ?it/s] 2025-09-07T07:59:42.2340562Z cpu eval DistilBertForQuestionAnswering 2025-09-07T07:59:42.3522959Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:59:42.4038078Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:59:42.4491941Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T07:59:59.7116249Z Autotune Choices Stats: 2025-09-07T07:59:59.7117164Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_36", "best_time": 0.005049000264989445} 2025-09-07T07:59:59.7125986Z AUTOTUNE linear_unary(128x768, 2x768, 2) 2025-09-07T07:59:59.7126264Z strides: [768, 1], [1, 0], [1] 2025-09-07T07:59:59.7126539Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T07:59:59.7130198Z cpp_CppMicroGemmAMX_36 0.0050 ms 100.0% 2025-09-07T07:59:59.7130708Z _linear_pointwise 0.0374 ms 13.5% 2025-09-07T07:59:59.7132829Z SingleProcess AUTOTUNE benchmarking takes 0.2530 seconds and 1.2910 seconds precompiling for 2 choices 2025-09-07T07:59:59.9968925Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9969734Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9970053Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9970267Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9970479Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9970692Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9970979Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T07:59:59.9971384Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:59:59.9971736Z return mod(**inputs) 2025-09-07T07:59:59.9972182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1043, in forward 2025-09-07T07:59:59.9972666Z logits = self.qa_outputs(hidden_states) # (bs, max_query_len, 2) 2025-09-07T07:59:59.9972864Z 2025-09-07T07:59:59.9972960Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9973180Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9973396Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9973602Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9973847Z cudagraph partition due to non gpu ops. 
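In the AUTOTUNE report above, the percentage column is the best measured time divided by each candidate's time. A quick check against the numbers printed for linear_unary(128x768, 2x768, 2), with values copied from the log and the arithmetic shown only as an illustration:

    # best_time from "Autotune Choices Stats"; candidate time from the report line.
    best = 0.005049      # cpp_CppMicroGemmAMX_36, ms
    candidate = 0.0374   # _linear_pointwise, ms
    print(f"{best / candidate * 100:.1f}%")  # ~13.5%, matching the report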
Found from : 2025-09-07T07:59:59.9974221Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:59:59.9974571Z return mod(**inputs) 2025-09-07T07:59:59.9974983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-09-07T07:59:59.9975419Z distilbert_output = self.distilbert( 2025-09-07T07:59:59.9975857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-09-07T07:59:59.9976288Z return self.transformer( 2025-09-07T07:59:59.9976703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-09-07T07:59:59.9977128Z layer_outputs = layer_module( 2025-09-07T07:59:59.9977482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:59:59.9977949Z return super().__call__(*args, **kwargs) 2025-09-07T07:59:59.9978384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-09-07T07:59:59.9978818Z sa_output = self.attention( 2025-09-07T07:59:59.9979252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-09-07T07:59:59.9979749Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:59:59.9979950Z 2025-09-07T07:59:59.9980032Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9980256Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9980472Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9980683Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9980896Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9981109Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9981320Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9981558Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T07:59:59.9982356Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T07:59:59.9982720Z return mod(**inputs) 2025-09-07T07:59:59.9983157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward 2025-09-07T07:59:59.9983587Z distilbert_output = self.distilbert( 2025-09-07T07:59:59.9984017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward 2025-09-07T07:59:59.9984655Z return self.transformer( 2025-09-07T07:59:59.9985062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward 2025-09-07T07:59:59.9985479Z layer_outputs = layer_module( 2025-09-07T07:59:59.9985835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T07:59:59.9986253Z return super().__call__(*args, **kwargs) 2025-09-07T07:59:59.9986677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward 2025-09-07T07:59:59.9987098Z sa_output = self.attention( 2025-09-07T07:59:59.9987502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward 2025-09-07T07:59:59.9987969Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T07:59:59.9988160Z 2025-09-07T07:59:59.9988241Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9988452Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9988658Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9988856Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9989062Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9989268Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9989474Z cudagraph partition due to non gpu ops 2025-09-07T07:59:59.9989705Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-07T08:00:00.0013330Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:00:00.0013796Z     return mod(**inputs)
2025-09-07T08:00:00.0014226Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1031, in forward
2025-09-07T08:00:00.0014694Z     distilbert_output = self.distilbert(
2025-09-07T08:00:00.0015174Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 736, in forward
2025-09-07T08:00:00.0015623Z     return self.transformer(
2025-09-07T08:00:00.0016050Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 541, in forward
2025-09-07T08:00:00.0016507Z     layer_outputs = layer_module(
2025-09-07T08:00:00.0016879Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:00:00.0017279Z     return super().__call__(*args, **kwargs)
2025-09-07T08:00:00.0017727Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 476, in forward
2025-09-07T08:00:00.0018174Z     sa_output = self.attention(
2025-09-07T08:00:00.0018604Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 402, in forward
2025-09-07T08:00:00.0019116Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:00:00.0019322Z
2025-09-07T08:00:00.0019427Z cudagraph partition due to non gpu ops
2025-09-07T08:00:00.0019661Z cudagraph partition due to non gpu ops
2025-09-07T08:00:00.0019889Z cudagraph partition due to non gpu ops
2025-09-07T08:00:00.0020107Z cudagraph partition due to non gpu ops
2025-09-07T08:00:00.0020361Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:00:00.0020760Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:00:00.0021120Z     return mod(**inputs)
2025-09-07T08:00:00.0021549Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1061, in forward
2025-09-07T08:00:00.0022031Z     start_loss = loss_fct(start_logits, start_positions)
2025-09-07T08:00:00.0022210Z
2025-09-07T08:00:00.0022323Z cudagraph partition due to non gpu ops.
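For context on the DistilBERT traces above: the frame they pinpoint (modeling_distilbert.py:402) is a call to torch.nn.functional.scaled_dot_product_attention, which on this CPU-only run is a "non gpu op" and therefore a cudagraph partition point. A minimal, hypothetical sketch of that call pattern, with shapes and dtypes invented for illustration (not taken from this job):

    import torch
    import torch.nn.functional as F

    # Toy per-head query/key/value tensors; (batch, heads, seq_len, head_dim)
    # values are arbitrary and only stand in for DistilBERT's real activations.
    q = torch.randn(1, 12, 128, 64)
    k = torch.randn(1, 12, 128, 64)
    v = torch.randn(1, 12, 128, 64)

    # Same functional call the traceback frame reports; running it on CPU
    # tensors is what the "cudagraph partition due to non gpu ops" messages refer to.
    attn_output = F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
    print(attn_output.shape)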
Found from :
2025-09-07T08:00:00.0022712Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:00:00.0023069Z     return mod(**inputs)
2025-09-07T08:00:00.0023486Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 1062, in forward
2025-09-07T08:00:00.0023956Z     end_loss = loss_fct(end_logits, end_positions)
2025-09-07T08:00:00.0024122Z
2025-09-07T08:00:04.4739183Z Compilation time (from dynamo_timed): 20.957183946
2025-09-07T08:00:04.4739527Z pass
2025-09-07T08:00:04.4739844Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:00:04.4740948Z TIMING: _recursive_pre_grad_passes:0.01895 _recursive_joint_graph_passes:0.24906 _recursive_post_grad_passes:0.05022 linear_unary_template_precompiling:1.29726 linear_unary_template_autotuning:0.25132 async_compile.wait:0.683 code_gen:4.27167 inductor_compile:16.87207 backend_compile:19.54988 gc:0.00031 entire_frame_compile:20.95718 total_wall_time:20.95718
2025-09-07T08:00:04.4742180Z STATS: call_* op count: 163 | FakeTensorMode.__torch_dispatch__:14165 | FakeTensor.__torch_dispatch__:1651 | ProxyTorchDispatchMode.__torch_dispatch__:3793
2025-09-07T08:00:04.4743138Z Dynamo produced 1 graphs covering 163 ops with 0 graph breaks (0 unique)
2025-09-07T08:00:07.1402856Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:00:07.1404441Z   import pynvml  # type: ignore[import]
2025-09-07T08:00:09.8730008Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:00:09.8731050Z   from pkg_resources import resource_filename
2025-09-07T08:00:10.5215898Z
2025-09-07T08:00:12.4919743Z loading model: 0it [00:00, ?it/s]`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
2025-09-07T08:00:12.4920679Z WARNING:transformers.modeling_utils:`loss_type=None` was set in the config but it is unrecognised.Using the default loss: `ForCausalLMLoss`.
2025-09-07T08:00:12.5244888Z
2025-09-07T08:00:12.5245856Z loading model: 0it [00:02, ?it/s]
2025-09-07T08:00:12.5246206Z cpu eval DistillGPT2
2025-09-07T08:00:12.9255738Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:00:13.0501908Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:00:13.1675831Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:00:29.1428605Z Autotune Choices Stats:
2025-09-07T08:00:29.1429319Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.12466799989852007}
2025-09-07T08:00:29.1444696Z AUTOTUNE linear_unary(512x768, 2304x768, 2304)
2025-09-07T08:00:29.1445839Z   strides: [768, 1], [1, 0], [1]
2025-09-07T08:00:29.1446404Z   dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:00:29.1447016Z   cpp_CppMicroGemmAMX_0 0.1247 ms 100.0%
2025-09-07T08:00:29.1447380Z   _linear_pointwise 0.1841 ms 67.7%
2025-09-07T08:00:29.1447999Z SingleProcess AUTOTUNE benchmarking takes 0.2875 seconds and 1.3789 seconds precompiling for 2 choices
2025-09-07T08:00:31.0651002Z Autotune Choices Stats:
2025-09-07T08:00:31.0651625Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_2", "best_time": 0.1784289997885935}
2025-09-07T08:00:31.0660722Z AUTOTUNE linear_unary(512x768, 3072x768, 3072)
2025-09-07T08:00:31.0661033Z   strides: [768, 1], [1, 0], [1]
2025-09-07T08:00:31.0661296Z   dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:00:31.0661624Z   cpp_CppMicroGemmAMX_2 0.1784 ms 100.0%
2025-09-07T08:00:31.0661865Z   _linear_pointwise 0.2174 ms 82.1%
2025-09-07T08:00:31.0662236Z SingleProcess AUTOTUNE benchmarking takes 0.2941 seconds and 1.3694 seconds precompiling for 2 choices
2025-09-07T08:00:35.1373425Z Autotune Choices Stats:
2025-09-07T08:00:35.1373853Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_24", "best_time": 3.0036515001938824}
2025-09-07T08:00:35.1392357Z AUTOTUNE linear_unary(512x768, 50257x768)
2025-09-07T08:00:35.1400011Z   strides: [768, 1], [1, 0]
2025-09-07T08:00:35.1402461Z   dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:00:35.1402922Z   cpp_CppMicroGemmAMX_24 3.0037 ms 100.0%
2025-09-07T08:00:35.1403292Z   _linear_pointwise 7.0130 ms 42.8%
2025-09-07T08:00:35.1404473Z SingleProcess AUTOTUNE benchmarking takes 0.8530 seconds and 1.3715 seconds precompiling for 2 choices
2025-09-07T08:00:35.5126844Z cudagraph partition due to non gpu ops
2025-09-07T08:00:35.5127751Z cudagraph partition due to non gpu ops
2025-09-07T08:00:35.5127990Z cudagraph partition due to non gpu ops
2025-09-07T08:00:35.5128209Z cudagraph partition due to non gpu ops
2025-09-07T08:00:35.5128439Z cudagraph partition due to non gpu ops
2025-09-07T08:00:35.5130321Z cudagraph partition due to non gpu ops
2025-09-07T08:00:35.5130795Z cudagraph partition due to non gpu ops.
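The AUTOTUNE blocks above show Inductor benchmarking its C++ AMX micro-GEMM template against the ATen _linear_pointwise fallback for bfloat16 linear layers and picking the faster one. A rough, hypothetical sketch of triggering the same kind of CPU max-autotune compilation outside the benchmark harness (the mode and dtype mirror this job's configuration, but the module, sizes, and omission of freezing/AMP are assumptions of the example):

    import torch
    import torch.nn as nn

    # A bf16 linear roughly matching the 512x768 -> 2304 case autotuned above;
    # the exact sizes here are illustrative only.
    layer = nn.Linear(768, 2304).to(torch.bfloat16).eval()
    x = torch.randn(512, 768, dtype=torch.bfloat16)

    # mode="max-autotune" is what lets Inductor benchmark competing GEMM choices.
    # (The CI job additionally enables inductor freezing and AMP, not shown here.)
    compiled = torch.compile(layer, mode="max-autotune")
    with torch.no_grad():
        out = compiled(x)
    print(out.shape, out.dtype)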
Found from :
2025-09-07T08:00:35.5131411Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward
2025-09-07T08:00:35.5132085Z     transformer_outputs = self.transformer(
2025-09-07T08:00:35.5132569Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-09-07T08:00:35.5132991Z     outputs = block(
2025-09-07T08:00:35.5133361Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:00:35.5133782Z     return super().__call__(*args, **kwargs)
2025-09-07T08:00:35.5134222Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:00:35.5134640Z     return func(*args, **kwargs)
2025-09-07T08:00:35.5135073Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-09-07T08:00:35.5135514Z     attn_output, self_attn_weights = self.attn(
2025-09-07T08:00:35.5135943Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:00:35.5136380Z     return func(*args, **kwargs)
2025-09-07T08:00:35.5136783Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward
2025-09-07T08:00:35.5137325Z     query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2)
2025-09-07T08:00:35.5137830Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward
2025-09-07T08:00:35.5138282Z     x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
2025-09-07T08:00:35.5138480Z
2025-09-07T08:00:35.5138576Z cudagraph partition due to non gpu ops
2025-09-07T08:00:35.5138815Z cudagraph partition due to non gpu ops
2025-09-07T08:00:35.5139042Z cudagraph partition due to non gpu ops
2025-09-07T08:00:35.5139266Z cudagraph partition due to non gpu ops
2025-09-07T08:00:35.5139524Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T08:00:35.5139985Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward
2025-09-07T08:00:35.5140442Z     transformer_outputs = self.transformer(
2025-09-07T08:00:35.5140868Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-09-07T08:00:35.5141282Z     outputs = block(
2025-09-07T08:00:35.5141634Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:00:35.5142093Z     return super().__call__(*args, **kwargs)
2025-09-07T08:00:35.5142519Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:00:35.5142938Z     return func(*args, **kwargs)
2025-09-07T08:00:35.5143342Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-09-07T08:00:35.5143779Z     attn_output, self_attn_weights = self.attn(
2025-09-07T08:00:35.5144221Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:00:35.5144634Z     return func(*args, **kwargs)
2025-09-07T08:00:35.5145378Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward
2025-09-07T08:00:35.5145893Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:00:35.5146420Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-09-07T08:00:35.5146953Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:00:35.5147165Z
2025-09-07T08:00:35.5147286Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T08:00:35.5147739Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward
2025-09-07T08:00:35.5148229Z     transformer_outputs = self.transformer(
2025-09-07T08:00:35.5148661Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-09-07T08:00:35.5149068Z     outputs = block(
2025-09-07T08:00:35.5149422Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:00:35.5149827Z     return super().__call__(*args, **kwargs)
2025-09-07T08:00:35.5150218Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:00:35.5150597Z     return func(*args, **kwargs)
2025-09-07T08:00:35.5150973Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-09-07T08:00:35.5151373Z     attn_output, self_attn_weights = self.attn(
2025-09-07T08:00:35.5151767Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:00:35.5152135Z     return func(*args, **kwargs)
2025-09-07T08:00:35.5152506Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward
2025-09-07T08:00:35.5153018Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:00:35.5153475Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-09-07T08:00:35.5153945Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:00:35.5154113Z
2025-09-07T08:00:35.5154225Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T08:00:35.5154646Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward
2025-09-07T08:00:35.5155053Z     transformer_outputs = self.transformer(
2025-09-07T08:00:35.5155449Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-09-07T08:00:35.5155818Z     outputs = block(
2025-09-07T08:00:35.5156151Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:00:35.5156521Z     return super().__call__(*args, **kwargs)
2025-09-07T08:00:35.5156905Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:00:35.5157285Z     return func(*args, **kwargs)
2025-09-07T08:00:35.5157650Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-09-07T08:00:35.5158049Z     attn_output, self_attn_weights = self.attn(
2025-09-07T08:00:35.5158443Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:00:35.5158816Z     return func(*args, **kwargs)
2025-09-07T08:00:35.5159184Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward
2025-09-07T08:00:35.5159577Z     attn_output = self.c_proj(attn_output)
2025-09-07T08:00:35.5159941Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward
2025-09-07T08:00:35.5160349Z     x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
2025-09-07T08:00:35.5160576Z
2025-09-07T08:00:35.5160693Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T08:00:35.5161112Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward
2025-09-07T08:00:35.5161511Z     transformer_outputs = self.transformer(
2025-09-07T08:00:35.5161906Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-09-07T08:00:35.5162286Z     outputs = block(
2025-09-07T08:00:35.5162646Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:00:35.5163010Z     return super().__call__(*args, **kwargs)
2025-09-07T08:00:35.5163396Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:00:35.5163780Z     return func(*args, **kwargs)
2025-09-07T08:00:35.5164158Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-09-07T08:00:35.5164565Z     attn_output, self_attn_weights = self.attn(
2025-09-07T08:00:35.5164951Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:00:35.5165374Z     return func(*args, **kwargs)
2025-09-07T08:00:35.5165766Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward
2025-09-07T08:00:35.5166196Z     attn_output = self.c_proj(attn_output)
2025-09-07T08:00:35.5166574Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward
2025-09-07T08:00:35.5167169Z     x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
2025-09-07T08:00:35.5167370Z
2025-09-07T08:00:35.5167486Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:00:35.5167937Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward
2025-09-07T08:00:35.5168371Z     transformer_outputs = self.transformer(
2025-09-07T08:00:35.5168784Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-09-07T08:00:35.5169189Z     outputs = block(
2025-09-07T08:00:35.5169539Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:00:35.5169939Z     return super().__call__(*args, **kwargs)
2025-09-07T08:00:35.5170344Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:00:35.5170746Z     return func(*args, **kwargs)
2025-09-07T08:00:35.5171143Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward
2025-09-07T08:00:35.5171587Z     feed_forward_hidden_states = self.mlp(hidden_states)
2025-09-07T08:00:35.5172031Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 367, in forward
2025-09-07T08:00:35.5172447Z     hidden_states = self.c_proj(hidden_states)
2025-09-07T08:00:35.5172839Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward
2025-09-07T08:00:35.5173266Z     x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
2025-09-07T08:00:35.5173451Z
2025-09-07T08:00:35.5173573Z cudagraph partition due to non gpu ops.
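The GPT-2 traces above all bottom out in transformers' Conv1D projection (pytorch_utils.py:116), which computes the projection as a single torch.addmm. A small, self-contained illustration of that exact computation, with sizes invented for the example (they are not read from this run):

    import torch

    # transformers' Conv1D stores its weight as (in_features, out_features),
    # so the projection is bias + x @ weight, fused into one addmm call.
    batch, seq, n_embd, n_out = 1, 128, 768, 2304   # made-up sizes for illustration
    x = torch.randn(batch, seq, n_embd)
    weight = torch.randn(n_embd, n_out)
    bias = torch.zeros(n_out)

    # Same call shape as the traceback frame: flatten (batch, seq) before the GEMM,
    # then restore the leading dimensions afterwards.
    out = torch.addmm(bias, x.view(-1, x.size(-1)), weight)
    out = out.view(batch, seq, n_out)
    print(out.shape)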
Found from : 2025-09-07T08:00:35.5307287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-09-07T08:00:35.5307677Z transformer_outputs = self.transformer( 2025-09-07T08:00:35.5308060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:00:35.5308420Z outputs = block( 2025-09-07T08:00:35.5308736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:00:35.5309099Z return super().__call__(*args, **kwargs) 2025-09-07T08:00:35.5309471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5309838Z return func(*args, **kwargs) 2025-09-07T08:00:35.5310211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:00:35.5310611Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:00:35.5311008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5311383Z return func(*args, **kwargs) 2025-09-07T08:00:35.5311743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:00:35.5312137Z attn_output = self.c_proj(attn_output) 2025-09-07T08:00:35.5312497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:00:35.5312903Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:00:35.5313076Z 2025-09-07T08:00:35.5313189Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:00:35.5313596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-09-07T08:00:35.5314002Z transformer_outputs = self.transformer( 2025-09-07T08:00:35.5314382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:00:35.5314746Z outputs = block( 2025-09-07T08:00:35.5315056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:00:35.5315411Z return super().__call__(*args, **kwargs) 2025-09-07T08:00:35.5315787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5316199Z return func(*args, **kwargs) 2025-09-07T08:00:35.5316567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:00:35.5316959Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:00:35.5317378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5317776Z return func(*args, **kwargs) 2025-09-07T08:00:35.5318199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:00:35.5318605Z attn_output = self.c_proj(attn_output) 2025-09-07T08:00:35.5318979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:00:35.5319373Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:00:35.5319549Z 2025-09-07T08:00:35.5319664Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:00:35.5320084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-09-07T08:00:35.5320476Z transformer_outputs = self.transformer( 2025-09-07T08:00:35.5320868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:00:35.5321244Z outputs = block( 2025-09-07T08:00:35.5321574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:00:35.5321945Z return super().__call__(*args, **kwargs) 2025-09-07T08:00:35.5322345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5322743Z return func(*args, **kwargs) 2025-09-07T08:00:35.5323136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:00:35.5323562Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:00:35.5323971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5324368Z return func(*args, **kwargs) 2025-09-07T08:00:35.5324764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:00:35.5325306Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:00:35.5325802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:00:35.5326219Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:00:35.5326410Z 2025-09-07T08:00:35.5326501Z cudagraph partition due to non gpu ops 2025-09-07T08:00:35.5326960Z cudagraph partition due to non gpu ops 2025-09-07T08:00:35.5327285Z cudagraph partition due to non gpu ops 2025-09-07T08:00:35.5327762Z cudagraph partition due to non gpu ops 2025-09-07T08:00:35.5328071Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:00:35.5328584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-09-07T08:00:35.5329067Z transformer_outputs = self.transformer( 2025-09-07T08:00:35.5329531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:00:35.5329966Z outputs = block( 2025-09-07T08:00:35.5345422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:00:35.5345830Z return super().__call__(*args, **kwargs) 2025-09-07T08:00:35.5346235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5346822Z return func(*args, **kwargs) 2025-09-07T08:00:35.5347203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:00:35.5347617Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:00:35.5348014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5348395Z return func(*args, **kwargs) 2025-09-07T08:00:35.5348833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:00:35.5349247Z attn_output, attn_weights = attention_interface( 2025-09-07T08:00:35.5349698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:00:35.5350183Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:00:35.5350374Z 2025-09-07T08:00:35.5350495Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:00:35.5350913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-09-07T08:00:35.5351320Z transformer_outputs = self.transformer( 2025-09-07T08:00:35.5351716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:00:35.5352096Z outputs = block( 2025-09-07T08:00:35.5352430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:00:35.5352793Z return super().__call__(*args, **kwargs) 2025-09-07T08:00:35.5353174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5353548Z return func(*args, **kwargs) 2025-09-07T08:00:35.5353917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:00:35.5354304Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:00:35.5354693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5355064Z return func(*args, **kwargs) 2025-09-07T08:00:35.5355431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:00:35.5355834Z attn_output, attn_weights = attention_interface( 2025-09-07T08:00:35.5356266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:00:35.5356724Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:00:35.5356894Z 2025-09-07T08:00:35.5357003Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:00:35.5357432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-09-07T08:00:35.5357818Z transformer_outputs = self.transformer( 2025-09-07T08:00:35.5358191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:00:35.5358554Z outputs = block( 2025-09-07T08:00:35.5358870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:00:35.5359229Z return super().__call__(*args, **kwargs) 2025-09-07T08:00:35.5359593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5359955Z return func(*args, **kwargs) 2025-09-07T08:00:35.5360311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:00:35.5360739Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:00:35.5361128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5361499Z return func(*args, **kwargs) 2025-09-07T08:00:35.5361875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:00:35.5362269Z attn_output = self.c_proj(attn_output) 2025-09-07T08:00:35.5362667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:00:35.5363080Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:00:35.5363264Z 2025-09-07T08:00:35.5363374Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:00:35.5363820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-09-07T08:00:35.5364252Z transformer_outputs = self.transformer( 2025-09-07T08:00:35.5364669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:00:35.5365071Z outputs = block( 2025-09-07T08:00:35.5365420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:00:35.5365813Z return super().__call__(*args, **kwargs) 2025-09-07T08:00:35.5366227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5366624Z return func(*args, **kwargs) 2025-09-07T08:00:35.5367101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:00:35.5367527Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:00:35.5367945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5368349Z return func(*args, **kwargs) 2025-09-07T08:00:35.5368727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:00:35.5369097Z attn_output = self.c_proj(attn_output) 2025-09-07T08:00:35.5369449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:00:35.5369837Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:00:35.5370014Z 2025-09-07T08:00:35.5370131Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:00:35.5370544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1183, in forward 2025-09-07T08:00:35.5370929Z transformer_outputs = self.transformer( 2025-09-07T08:00:35.5371315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:00:35.5371686Z outputs = block( 2025-09-07T08:00:35.5372003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:00:35.5372358Z return super().__call__(*args, **kwargs) 2025-09-07T08:00:35.5372733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:00:35.5373098Z return func(*args, **kwargs) 2025-09-07T08:00:35.5373467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-09-07T08:00:35.5373891Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-09-07T08:00:35.5374300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 367, in forward 2025-09-07T08:00:35.5374695Z hidden_states = self.c_proj(hidden_states) 2025-09-07T08:00:35.5375080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:00:35.5375516Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:00:35.5375682Z 2025-09-07T08:00:35.5375774Z cudagraph partition due to non gpu ops 2025-09-07T08:00:39.6588342Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:00:39.6594302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss 2025-09-07T08:00:39.6596122Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs) 2025-09-07T08:00:39.6596717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy 2025-09-07T08:00:39.6597329Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction) 2025-09-07T08:00:39.6597634Z 2025-09-07T08:00:40.8638837Z Compilation time (from dynamo_timed): 26.336473228 2025-09-07T08:00:40.8755151Z pass 2025-09-07T08:00:40.8760620Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:00:40.8761667Z TIMING: _recursive_pre_grad_passes:0.0273 _recursive_joint_graph_passes:0.21965 inductor_compile:21.3227 backend_compile:23.31662 gc:0.00382 entire_frame_compile:26.33647 _recursive_post_grad_passes:0.05426 linear_unary_template_precompiling:4.12328 linear_unary_template_autotuning:1.42901 async_compile.wait:1.60889 code_gen:4.81812 total_wall_time:26.33647 2025-09-07T08:00:40.8762801Z STATS: call_* op count: 301 | FakeTensorMode.__torch_dispatch__:13702 | FakeTensor.__torch_dispatch__:1972 | ProxyTorchDispatchMode.__torch_dispatch__:2742 2025-09-07T08:00:40.8763316Z Dynamo produced 3 graphs covering 301 ops with 2 graph breaks (1 unique) 2025-09-07T08:00:43.6913739Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T08:00:43.6914740Z import pynvml # type: ignore[import] 2025-09-07T08:00:46.3532000Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
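Every "cudagraph partition due to non gpu ops" message above bottoms out in transformers' Conv1D layer (pytorch_utils.py line 116), whose forward is x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight); since this shard runs the models on CPU ("cpu eval"), every op in the compiled graph counts as non-GPU for the cudagraph partitioner. A minimal sketch of how such a GPT-2-style causal-LM model gets compiled and run here, assuming a small random-weight stand-in rather than the actual benchmark harness:

    import torch
    from transformers import GPT2Config, GPT2LMHeadModel

    # Small random-weight stand-in; the harness loads the real pretrained checkpoints.
    model = GPT2LMHeadModel(GPT2Config(n_layer=2)).eval()
    input_ids = torch.randint(0, model.config.vocab_size, (1, 128))  # hypothetical batch

    compiled = torch.compile(model, mode="max-autotune")  # Inductor with autotuned GEMM templates
    with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        out = compiled(input_ids=input_ids, labels=input_ids)  # labels exercise the cross_entropy path traced above
    print(out.loss, out.logits.shape)

Under bf16 autocast each Conv1D addmm runs in bfloat16, which matches the bfloat16 dtypes reported in the AUTOTUNE lines below.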
2025-09-07T08:00:46.3532930Z from pkg_resources import resource_filename 2025-09-07T08:00:47.0217495Z 2025-09-07T08:00:47.0228094Z loading model: 0it [00:00, ?it/s]If you want to use `ElectraForCausalLM` as a standalone, add `is_decoder=True.` 2025-09-07T08:00:47.0228726Z WARNING:transformers.models.electra.modeling_electra:If you want to use `ElectraForCausalLM` as a standalone, add `is_decoder=True.` 2025-09-07T08:00:47.2560483Z 2025-09-07T08:00:47.2561433Z loading model: 0it [00:00, ?it/s] 2025-09-07T08:00:47.2561811Z cpu eval ElectraForCausalLM 2025-09-07T08:00:47.4332035Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:00:47.5278254Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:00:47.6233939Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:01:06.5325900Z Autotune Choices Stats: 2025-09-07T08:01:06.5326566Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.010905000181082869} 2025-09-07T08:01:06.5337447Z AUTOTUNE linear_unary(512x128, 256x128, 256) 2025-09-07T08:01:06.5337779Z strides: [128, 1], [1, 0], [1] 2025-09-07T08:01:06.5338675Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:01:06.5341915Z cpp_CppMicroGemmAMX_0 0.0109 ms 100.0% 2025-09-07T08:01:06.5342607Z _linear_pointwise 0.0570 ms 19.1% 2025-09-07T08:01:06.5343142Z SingleProcess AUTOTUNE benchmarking takes 0.2557 seconds and 1.2966 seconds precompiling for 2 choices 2025-09-07T08:01:08.1798356Z Autotune Choices Stats: 2025-09-07T08:01:08.1798824Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_1", "best_time": 0.01819399994928972} 2025-09-07T08:01:08.1807128Z AUTOTUNE linear_unary(512x256, 256x256, 256) 2025-09-07T08:01:08.1807402Z strides: [256, 1], [1, 0], [1] 2025-09-07T08:01:08.1808061Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:01:08.1808356Z cpp_CppMicroGemmAMX_1 0.0182 ms 100.0% 2025-09-07T08:01:08.1808595Z _linear_pointwise 0.0625 ms 29.1% 2025-09-07T08:01:08.1808961Z SingleProcess AUTOTUNE benchmarking takes 0.2568 seconds and 1.3083 seconds precompiling for 2 choices 2025-09-07T08:01:09.9834666Z Autotune Choices Stats: 2025-09-07T08:01:09.9835268Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_4", "best_time": 0.01862800036178669} 2025-09-07T08:01:09.9846732Z AUTOTUNE linear_binary(512x256, 512x256, 256x256, 256) 2025-09-07T08:01:09.9847203Z strides: [256, 1], [256, 1], [1, 0], [1] 2025-09-07T08:01:09.9847631Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:01:09.9848064Z cpp_CppMicroGemmAMX_4 0.0186 ms 100.0% 2025-09-07T08:01:09.9848402Z _linear_pointwise.binary 0.0706 ms 26.4% 2025-09-07T08:01:09.9848927Z SingleProcess AUTOTUNE benchmarking takes 0.2601 seconds and 1.3056 seconds precompiling for 2 choices 2025-09-07T08:01:11.8555218Z Autotune Choices Stats: 2025-09-07T08:01:11.8555879Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_5", "best_time": 0.0780910004323232} 2025-09-07T08:01:11.8566624Z AUTOTUNE linear_unary(512x256, 1024x256, 1024) 2025-09-07T08:01:11.8567190Z strides: [256, 1], [1, 0], [1] 2025-09-07T08:01:11.8567479Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:01:11.8567875Z cpp_CppMicroGemmAMX_5 0.0781 ms 100.0% 2025-09-07T08:01:11.8568238Z 
_linear_pointwise 0.0850 ms 91.9% 2025-09-07T08:01:11.8569393Z SingleProcess AUTOTUNE benchmarking takes 0.2685 seconds and 1.4695 seconds precompiling for 2 choices 2025-09-07T08:01:13.5556003Z Autotune Choices Stats: 2025-09-07T08:01:13.5556608Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_6", "best_time": 0.025568000182829564} 2025-09-07T08:01:13.5567564Z AUTOTUNE linear_binary(512x1024, 512x256, 256x1024, 256) 2025-09-07T08:01:13.5568097Z strides: [1024, 1], [256, 1], [1, 0], [1] 2025-09-07T08:01:13.5568517Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:01:13.5568886Z cpp_CppMicroGemmAMX_6 0.0256 ms 100.0% 2025-09-07T08:01:13.5569294Z _linear_pointwise.binary 0.0763 ms 33.5% 2025-09-07T08:01:13.5573550Z SingleProcess AUTOTUNE benchmarking takes 0.2695 seconds and 1.3459 seconds precompiling for 2 choices 2025-09-07T08:01:20.5243883Z Autotune Choices Stats: 2025-09-07T08:01:20.5244554Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_73", "best_time": 0.007710999852861278} 2025-09-07T08:01:20.5255073Z AUTOTUNE linear_unary(512x256, 128x256, 128) 2025-09-07T08:01:20.5255565Z strides: [256, 1], [1, 0], [1] 2025-09-07T08:01:20.5256037Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:01:20.5256557Z cpp_CppMicroGemmAMX_73 0.0077 ms 100.0% 2025-09-07T08:01:20.5257003Z _linear_pointwise 0.0518 ms 14.9% 2025-09-07T08:01:20.5257702Z SingleProcess AUTOTUNE benchmarking takes 0.2544 seconds and 1.3282 seconds precompiling for 2 choices 2025-09-07T08:01:22.5272938Z Autotune Choices Stats: 2025-09-07T08:01:22.5273411Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_74", "best_time": 0.6415259999812406} 2025-09-07T08:01:22.5283006Z AUTOTUNE linear_unary(512x128, 30522x128, 30522) 2025-09-07T08:01:22.5283375Z strides: [128, 1], [1, 0], [1] 2025-09-07T08:01:22.5283985Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:01:22.5284348Z cpp_CppMicroGemmAMX_74 0.6415 ms 100.0% 2025-09-07T08:01:22.5284593Z _linear_pointwise 1.1179 ms 57.4% 2025-09-07T08:01:22.5284984Z SingleProcess AUTOTUNE benchmarking takes 0.5788 seconds and 1.3049 seconds precompiling for 2 choices 2025-09-07T08:01:22.8978295Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8982354Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8982571Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8982785Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8983369Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8983590Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8983794Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8984047Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8984259Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8984460Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8984665Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8984873Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8985078Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8985285Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8985488Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8985690Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8985900Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8986110Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.8986321Z cudagraph partition due to non gpu 
ops 2025-09-07T08:01:22.8986789Z cudagraph partition due to non gpu ops 2025-09-07T08:01:22.9008488Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:01:22.9008894Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:01:22.9009228Z return mod(**inputs) 2025-09-07T08:01:22.9009632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1564, in forward 2025-09-07T08:01:22.9010024Z lm_loss = self.loss_function( 2025-09-07T08:01:22.9010391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss 2025-09-07T08:01:22.9010861Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs) 2025-09-07T08:01:22.9011330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy 2025-09-07T08:01:22.9011816Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction) 2025-09-07T08:01:22.9012062Z 2025-09-07T08:01:31.1624844Z Compilation time (from dynamo_timed): 42.303743739 2025-09-07T08:01:31.1663611Z pass 2025-09-07T08:01:31.1665609Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:01:31.1667296Z TIMING: _recursive_pre_grad_passes:0.03866 _recursive_joint_graph_passes:0.47913 _recursive_post_grad_passes:0.08133 linear_unary_template_precompiling:6.7153 linear_unary_template_autotuning:1.60495 linear_binary_template_precompiling:2.6551 linear_binary_template_autotuning:0.52599 async_compile.wait:0.82288 code_gen:7.63458 inductor_compile:33.98748 backend_compile:39.42041 gc:0.00064 entire_frame_compile:42.30374 total_wall_time:42.30374 2025-09-07T08:01:31.1668743Z STATS: call_* op count: 379 | FakeTensorMode.__torch_dispatch__:29649 | FakeTensor.__torch_dispatch__:2889 | ProxyTorchDispatchMode.__torch_dispatch__:8450 2025-09-07T08:01:31.1669481Z Dynamo produced 1 graphs covering 379 ops with 0 graph breaks (0 unique) 2025-09-07T08:01:34.0546321Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T08:01:34.0547160Z import pynvml # type: ignore[import] 2025-09-07T08:01:36.7035346Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
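Each AUTOTUNE block above benchmarks Inductor's C++ AMX micro-GEMM template (cpp_CppMicroGemmAMX_*) against the _linear_pointwise fallback for one GEMM shape and keeps the faster kernel; the precompiling/benchmarking seconds are roughly what the linear_*_template_* entries in the TIMING line aggregate. A minimal sketch of enabling the same machinery directly, assuming the max_autotune and freezing knobs in torch._inductor.config (knob names can shift between releases):

    import torch
    import torch.nn as nn
    import torch._inductor.config as inductor_config

    inductor_config.max_autotune = True  # benchmark template choices per GEMM, as in the AUTOTUNE lines
    inductor_config.freezing = True      # constant-fold weights so linear_unary/linear_binary templates apply

    mlp = nn.Sequential(nn.Linear(256, 1024), nn.GELU(), nn.Linear(1024, 256)).eval()
    x = torch.randn(512, 256)  # mirrors the 512x256 activations autotuned above

    compiled = torch.compile(mlp)
    with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        y = compiled(x)
    print(y.shape, y.dtype)  # torch.Size([512, 256]) torch.bfloat16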
2025-09-07T08:01:36.7038030Z from pkg_resources import resource_filename 2025-09-07T08:01:37.3638010Z 2025-09-07T08:01:37.5448979Z loading model: 0it [00:00, ?it/s] 2025-09-07T08:01:37.5449344Z loading model: 0it [00:00, ?it/s] 2025-09-07T08:01:37.5449603Z cpu eval ElectraForQuestionAnswering 2025-09-07T08:01:37.6476111Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:01:37.7186080Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:01:37.7815854Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:02:02.4085245Z Autotune Choices Stats: 2025-09-07T08:02:02.4088360Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_73", "best_time": 0.005034999958297703} 2025-09-07T08:02:02.4094394Z AUTOTUNE linear_unary(512x256, 2x256, 2) 2025-09-07T08:02:02.4094891Z strides: [256, 1], [1, 0], [1] 2025-09-07T08:02:02.4095240Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:02:02.4095619Z cpp_CppMicroGemmAMX_73 0.0050 ms 100.0% 2025-09-07T08:02:02.4095941Z _linear_pointwise 0.0355 ms 14.2% 2025-09-07T08:02:02.4096459Z SingleProcess AUTOTUNE benchmarking takes 0.2612 seconds and 1.3234 seconds precompiling for 2 choices 2025-09-07T08:02:02.7827962Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:02:02.7828462Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:02.7832561Z return mod(**inputs) 2025-09-07T08:02:02.7836314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1330, in forward 2025-09-07T08:02:02.7836846Z logits = self.qa_outputs(sequence_output) 2025-09-07T08:02:02.7837019Z 2025-09-07T08:02:02.7837122Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7837408Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7837652Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7837906Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7838142Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7838390Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7838597Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7839202Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7839498Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7839729Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7839954Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7840219Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7840434Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7840642Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7840865Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7841070Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7841369Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7841590Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7841816Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7842035Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7842256Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7842484Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7842707Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7842931Z cudagraph partition due to non gpu ops 2025-09-07T08:02:02.7843165Z cudagraph partition 
due to non gpu ops 2025-09-07T08:02:02.7859596Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:02:02.7859995Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:02.7860341Z return mod(**inputs) 2025-09-07T08:02:02.7860741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1348, in forward 2025-09-07T08:02:02.7861199Z start_loss = loss_fct(start_logits, start_positions) 2025-09-07T08:02:02.7861383Z 2025-09-07T08:02:02.7861494Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:02:02.7861862Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:02.7862194Z return mod(**inputs) 2025-09-07T08:02:02.7862577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/electra/modeling_electra.py", line 1349, in forward 2025-09-07T08:02:02.7862990Z end_loss = loss_fct(end_logits, end_positions) 2025-09-07T08:02:02.7863146Z 2025-09-07T08:02:10.8358401Z Compilation time (from dynamo_timed): 31.967795323 2025-09-07T08:02:10.8367730Z pass 2025-09-07T08:02:10.8368112Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:02:10.8369186Z TIMING: _recursive_pre_grad_passes:0.03766 _recursive_joint_graph_passes:0.47745 _recursive_post_grad_passes:0.08577 linear_unary_template_precompiling:1.33198 linear_binary_template_precompiling:0.00415 linear_unary_template_autotuning:0.25924 async_compile.wait:0.64436 code_gen:7.13869 inductor_compile:22.93383 backend_compile:28.67867 gc:0.00012 entire_frame_compile:31.9678 total_wall_time:31.9678 2025-09-07T08:02:10.8370426Z STATS: call_* op count: 380 | FakeTensorMode.__torch_dispatch__:29443 | FakeTensor.__torch_dispatch__:2912 | ProxyTorchDispatchMode.__torch_dispatch__:8450 2025-09-07T08:02:10.8370923Z Dynamo produced 1 graphs covering 380 ops with 0 graph breaks (0 unique) 2025-09-07T08:02:13.5585600Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T08:02:13.5588332Z import pynvml # type: ignore[import] 2025-09-07T08:02:16.2474129Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
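For the question-answering model the two partition points named just above are the start/end losses. A minimal sketch of that head pattern in plain PyTorch; the shapes and the qa_outputs name are chosen to match the log, the rest is a reconstruction rather than transformers code verbatim:

    import torch
    import torch.nn.functional as F

    sequence_output = torch.randn(1, 512, 256)  # hypothetical encoder output (batch, seq, hidden)
    qa_outputs = torch.nn.Linear(256, 2)        # "logits = self.qa_outputs(sequence_output)" above

    start_logits, end_logits = qa_outputs(sequence_output).split(1, dim=-1)
    start_logits, end_logits = start_logits.squeeze(-1), end_logits.squeeze(-1)

    start_positions = torch.randint(0, 512, (1,))
    end_positions = torch.randint(0, 512, (1,))
    start_loss = F.cross_entropy(start_logits, start_positions)  # modeling_electra.py:1348 in the traceback
    end_loss = F.cross_entropy(end_logits, end_positions)        # modeling_electra.py:1349 in the traceback
    print((start_loss + end_loss) / 2)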
2025-09-07T08:02:16.2476744Z from pkg_resources import resource_filename 2025-09-07T08:02:16.9444298Z 2025-09-07T08:02:18.2358886Z loading model: 0it [00:00, ?it/s] 2025-09-07T08:02:18.2363140Z loading model: 0it [00:01, ?it/s] 2025-09-07T08:02:18.2363515Z cpu eval GPT2ForSequenceClassification 2025-09-07T08:02:19.1572979Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:02:19.3996899Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:02:19.6139219Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:02:36.8083409Z Autotune Choices Stats: 2025-09-07T08:02:36.8084113Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.20223699993948685} 2025-09-07T08:02:36.8092287Z AUTOTUNE linear_unary(1024x768, 2304x768, 2304) 2025-09-07T08:02:36.8092685Z strides: [768, 1], [1, 0], [1] 2025-09-07T08:02:36.8093080Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:02:36.8093529Z cpp_CppMicroGemmAMX_0 0.2022 ms 100.0% 2025-09-07T08:02:36.8093930Z _linear_pointwise 0.3124 ms 64.7% 2025-09-07T08:02:36.8094576Z SingleProcess AUTOTUNE benchmarking takes 0.3275 seconds and 1.3935 seconds precompiling for 2 choices 2025-09-07T08:02:38.7994447Z Autotune Choices Stats: 2025-09-07T08:02:38.7995654Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_2", "best_time": 0.3955610000048182} 2025-09-07T08:02:38.8003305Z AUTOTUNE linear_unary(1024x768, 3072x768, 3072) 2025-09-07T08:02:38.8003588Z strides: [768, 1], [1, 0], [1] 2025-09-07T08:02:38.8003949Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:02:38.8004238Z cpp_CppMicroGemmAMX_2 0.3956 ms 100.0% 2025-09-07T08:02:38.8007015Z _linear_pointwise 0.4066 ms 97.3% 2025-09-07T08:02:38.8007483Z SingleProcess AUTOTUNE benchmarking takes 0.3366 seconds and 1.3903 seconds precompiling for 2 choices 2025-09-07T08:02:44.7442876Z Autotune Choices Stats: 2025-09-07T08:02:44.7443360Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_48", "best_time": 0.007107999863364967} 2025-09-07T08:02:44.7458175Z AUTOTUNE linear_unary(1024x768, 2x768) 2025-09-07T08:02:44.7458599Z strides: [768, 1], [1, 0] 2025-09-07T08:02:44.7458894Z dtypes: torch.bfloat16, torch.bfloat16 2025-09-07T08:02:44.7459212Z cpp_CppMicroGemmAMX_48 0.0071 ms 100.0% 2025-09-07T08:02:44.7459518Z _linear_pointwise 0.0374 ms 19.0% 2025-09-07T08:02:44.7460065Z SingleProcess AUTOTUNE benchmarking takes 0.2686 seconds and 1.3877 seconds precompiling for 2 choices 2025-09-07T08:02:45.4045914Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4047099Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4047481Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4047776Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4048008Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4048240Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4048478Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4048708Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4048929Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4049501Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4049827Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4050066Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4050324Z 
cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:02:45.4050759Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4051156Z return mod(**inputs) 2025-09-07T08:02:45.4051616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1509, in forward 2025-09-07T08:02:45.4052205Z last_non_pad_token = (token_indices * non_pad_mask).argmax(-1) 2025-09-07T08:02:45.4052435Z 2025-09-07T08:02:45.4052558Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:02:45.4052962Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4053330Z return mod(**inputs) 2025-09-07T08:02:45.4053736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4054213Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4054654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4055070Z outputs = block( 2025-09-07T08:02:45.4055432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4055831Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4056252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4056666Z return func(*args, **kwargs) 2025-09-07T08:02:45.4057073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4057526Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4057954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4058366Z return func(*args, **kwargs) 2025-09-07T08:02:45.4058779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:02:45.4059320Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:02:45.4059845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4060284Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4060488Z 2025-09-07T08:02:45.4060581Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4060859Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4061101Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4061325Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4061599Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-07T08:02:45.4061991Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:02:45.4062345Z     return mod(**inputs)
2025-09-07T08:02:45.4062741Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-09-07T08:02:45.4063175Z     transformer_outputs = self.transformer(
2025-09-07T08:02:45.4063604Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-09-07T08:02:45.4064009Z     outputs = block(
2025-09-07T08:02:45.4064361Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:02:45.4064748Z     return super().__call__(*args, **kwargs)
2025-09-07T08:02:45.4065161Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:02:45.4065628Z     return func(*args, **kwargs)
2025-09-07T08:02:45.4066035Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-09-07T08:02:45.4066474Z     attn_output, self_attn_weights = self.attn(
2025-09-07T08:02:45.4066890Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:02:45.4067313Z     return func(*args, **kwargs)
2025-09-07T08:02:45.4067806Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward
2025-09-07T08:02:45.4068254Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:02:45.4068737Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-09-07T08:02:45.4069406Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:02:45.4069615Z
2025-09-07T08:02:45.4069732Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T08:02:45.4070124Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:02:45.4070493Z     return mod(**inputs)
2025-09-07T08:02:45.4070878Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-09-07T08:02:45.4071332Z     transformer_outputs = self.transformer(
2025-09-07T08:02:45.4071782Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-09-07T08:02:45.4072187Z     outputs = block(
2025-09-07T08:02:45.4072538Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:02:45.4072922Z     return super().__call__(*args, **kwargs)
2025-09-07T08:02:45.4073334Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:02:45.4073748Z     return func(*args, **kwargs)
2025-09-07T08:02:45.4074150Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-09-07T08:02:45.4074579Z     attn_output, self_attn_weights = self.attn(
2025-09-07T08:02:45.4074994Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:02:45.4075402Z     return func(*args, **kwargs)
2025-09-07T08:02:45.4075796Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward
2025-09-07T08:02:45.4076239Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:02:45.4076708Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-09-07T08:02:45.4077207Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:02:45.4077390Z
2025-09-07T08:02:45.4077504Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T08:02:45.4077895Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:02:45.4078245Z     return mod(**inputs)
2025-09-07T08:02:45.4078628Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-09-07T08:02:45.4079063Z     transformer_outputs = self.transformer(
2025-09-07T08:02:45.4079486Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-09-07T08:02:45.4079888Z     outputs = block(
2025-09-07T08:02:45.4080238Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:02:45.4080626Z     return super().__call__(*args, **kwargs)
2025-09-07T08:02:45.4081082Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:02:45.4081494Z     return func(*args, **kwargs)
2025-09-07T08:02:45.4081904Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-09-07T08:02:45.4082344Z     attn_output, self_attn_weights = self.attn(
2025-09-07T08:02:45.4082818Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:02:45.4083233Z     return func(*args, **kwargs)
2025-09-07T08:02:45.4083642Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward
2025-09-07T08:02:45.4084078Z     attn_output = self.c_proj(attn_output)
2025-09-07T08:02:45.4084474Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward
2025-09-07T08:02:45.4084930Z     x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
2025-09-07T08:02:45.4085128Z
2025-09-07T08:02:45.4085247Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T08:02:45.4085660Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:02:45.4086026Z     return mod(**inputs)
2025-09-07T08:02:45.4086422Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-09-07T08:02:45.4086978Z     transformer_outputs = self.transformer(
2025-09-07T08:02:45.4087442Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-09-07T08:02:45.4087866Z     outputs = block(
2025-09-07T08:02:45.4088229Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:02:45.4088640Z     return super().__call__(*args, **kwargs)
2025-09-07T08:02:45.4089067Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:02:45.4089493Z     return func(*args, **kwargs)
2025-09-07T08:02:45.4089904Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward
2025-09-07T08:02:45.4090361Z     feed_forward_hidden_states = self.mlp(hidden_states)
2025-09-07T08:02:45.4090826Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 367, in forward
2025-09-07T08:02:45.4091279Z     hidden_states = self.c_proj(hidden_states)
2025-09-07T08:02:45.4091690Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward
2025-09-07T08:02:45.4092134Z     x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
2025-09-07T08:02:45.4092329Z
2025-09-07T08:02:45.4092450Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T08:02:45.4092861Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:02:45.4093230Z     return mod(**inputs)
2025-09-07T08:02:45.4093634Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward
2025-09-07T08:02:45.4094082Z     transformer_outputs = self.transformer(
2025-09-07T08:02:45.4094518Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
2025-09-07T08:02:45.4094941Z     outputs = block(
2025-09-07T08:02:45.4095304Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:02:45.4095708Z     return super().__call__(*args, **kwargs)
2025-09-07T08:02:45.4096118Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:02:45.4096562Z     return func(*args, **kwargs)
2025-09-07T08:02:45.4096957Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward
2025-09-07T08:02:45.4097360Z     attn_output, self_attn_weights = self.attn(
2025-09-07T08:02:45.4097750Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:02:45.4098131Z     return func(*args, **kwargs)
2025-09-07T08:02:45.4098558Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward
2025-09-07T08:02:45.4099065Z     query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2)
2025-09-07T08:02:45.4099548Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward
2025-09-07T08:02:45.4099981Z     x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
2025-09-07T08:02:45.4100181Z
2025-09-07T08:02:45.4100279Z cudagraph partition due to non gpu ops
2025-09-07T08:02:45.4100523Z cudagraph partition due to non gpu ops
2025-09-07T08:02:45.4100762Z cudagraph partition due to non gpu ops
2025-09-07T08:02:45.4100994Z cudagraph partition due to non gpu ops
2025-09-07T08:02:45.4101250Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:02:45.4309422Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4309750Z return mod(**inputs) 2025-09-07T08:02:45.4310116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4310514Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4310915Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4311296Z outputs = block( 2025-09-07T08:02:45.4311626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4312008Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4312379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4312756Z return func(*args, **kwargs) 2025-09-07T08:02:45.4313132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4313538Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4313927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4314306Z return func(*args, **kwargs) 2025-09-07T08:02:45.4314679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:02:45.4315090Z attn_output, attn_weights = attention_interface( 2025-09-07T08:02:45.4315547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:02:45.4316031Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:02:45.4316222Z 2025-09-07T08:02:45.4316333Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4316704Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4317042Z return mod(**inputs) 2025-09-07T08:02:45.4317416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4317813Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4318214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4318595Z outputs = block( 2025-09-07T08:02:45.4318972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4319337Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4319758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4320164Z return func(*args, **kwargs) 2025-09-07T08:02:45.4320564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4321057Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4321481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4321957Z return func(*args, **kwargs) 2025-09-07T08:02:45.4322423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:02:45.4322873Z attn_output, attn_weights = attention_interface( 2025-09-07T08:02:45.4323363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:02:45.4323861Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:02:45.4324053Z 2025-09-07T08:02:45.4324169Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4324568Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4324935Z return mod(**inputs) 2025-09-07T08:02:45.4325328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4325777Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4326205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4326624Z outputs = block( 2025-09-07T08:02:45.4327060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4327464Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4327884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4328304Z return func(*args, **kwargs) 2025-09-07T08:02:45.4328720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4329153Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4329543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4329932Z return func(*args, **kwargs) 2025-09-07T08:02:45.4330338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:02:45.4330780Z attn_output = self.c_proj(attn_output) 2025-09-07T08:02:45.4331178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4331626Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4331825Z 2025-09-07T08:02:45.4331945Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4332351Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4332719Z return mod(**inputs) 2025-09-07T08:02:45.4333118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4333560Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4334000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4334468Z outputs = block( 2025-09-07T08:02:45.4334842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4335247Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4335672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4336096Z return func(*args, **kwargs) 2025-09-07T08:02:45.4336548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-09-07T08:02:45.4336997Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-09-07T08:02:45.4337449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 367, in forward 2025-09-07T08:02:45.4337880Z hidden_states = self.c_proj(hidden_states) 2025-09-07T08:02:45.4338269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4338676Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4338850Z 2025-09-07T08:02:45.4338958Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4339329Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4339660Z return mod(**inputs) 2025-09-07T08:02:45.4340024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4340421Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4340835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4341238Z outputs = block( 2025-09-07T08:02:45.4341587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4341980Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4342380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4342779Z return func(*args, **kwargs) 2025-09-07T08:02:45.4343151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4343560Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4343957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4344325Z return func(*args, **kwargs) 2025-09-07T08:02:45.4344699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:02:45.4345412Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:02:45.4345890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4346289Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4346470Z 2025-09-07T08:02:45.4346653Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4346875Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4347092Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4347305Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4347539Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4347910Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4348246Z return mod(**inputs) 2025-09-07T08:02:45.4348617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4349012Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4349467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4349874Z outputs = block( 2025-09-07T08:02:45.4350203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4350574Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4350957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4351389Z return func(*args, **kwargs) 2025-09-07T08:02:45.4351762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4352162Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4352573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4352935Z return func(*args, **kwargs) 2025-09-07T08:02:45.4353299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:02:45.4353699Z attn_output, attn_weights = attention_interface( 2025-09-07T08:02:45.4354140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:02:45.4354605Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:02:45.4354792Z 2025-09-07T08:02:45.4354901Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4355257Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4355579Z return mod(**inputs) 2025-09-07T08:02:45.4355935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4356319Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4356706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4357073Z outputs = block( 2025-09-07T08:02:45.4357393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4357751Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4358120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4358489Z return func(*args, **kwargs) 2025-09-07T08:02:45.4358852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4359248Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4359629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4360009Z return func(*args, **kwargs) 2025-09-07T08:02:45.4360390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:02:45.4360828Z attn_output, attn_weights = attention_interface( 2025-09-07T08:02:45.4361298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:02:45.4361779Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:02:45.4361965Z 2025-09-07T08:02:45.4362077Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4362468Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4362828Z return mod(**inputs) 2025-09-07T08:02:45.4363217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4363696Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4364124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4364543Z outputs = block( 2025-09-07T08:02:45.4364899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4365293Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4365743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4366154Z return func(*args, **kwargs) 2025-09-07T08:02:45.4366561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4367073Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4367500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4367918Z return func(*args, **kwargs) 2025-09-07T08:02:45.4368327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:02:45.4368769Z attn_output = self.c_proj(attn_output) 2025-09-07T08:02:45.4369160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4369612Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4369818Z 2025-09-07T08:02:45.4369938Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4370347Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4370713Z return mod(**inputs) 2025-09-07T08:02:45.4371108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4371553Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4371982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4372396Z outputs = block( 2025-09-07T08:02:45.4372753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4373149Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4373574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4373989Z return func(*args, **kwargs) 2025-09-07T08:02:45.4374406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4374801Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4375204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4375572Z return func(*args, **kwargs) 2025-09-07T08:02:45.4375938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:02:45.4376427Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:02:45.4376880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4377279Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4377456Z 2025-09-07T08:02:45.4377537Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4377754Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4377967Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4378166Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4378401Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4378814Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4379140Z return mod(**inputs) 2025-09-07T08:02:45.4379501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4379901Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4380298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4380709Z outputs = block( 2025-09-07T08:02:45.4381037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4381406Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4381806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4382224Z return func(*args, **kwargs) 2025-09-07T08:02:45.4382621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4383047Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4383464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4383862Z return func(*args, **kwargs) 2025-09-07T08:02:45.4384227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:02:45.4384623Z attn_output, attn_weights = attention_interface( 2025-09-07T08:02:45.4385056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:02:45.4385527Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:02:45.4385714Z 2025-09-07T08:02:45.4385823Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4386180Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4386496Z return mod(**inputs) 2025-09-07T08:02:45.4386855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4387245Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4387630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4387997Z outputs = block( 2025-09-07T08:02:45.4388309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4388671Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4389047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4389418Z return func(*args, **kwargs) 2025-09-07T08:02:45.4389785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4390181Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4390588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4391000Z return func(*args, **kwargs) 2025-09-07T08:02:45.4391418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:02:45.4391843Z attn_output, attn_weights = attention_interface( 2025-09-07T08:02:45.4392281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:02:45.4392732Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:02:45.4392910Z 2025-09-07T08:02:45.4393037Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4393393Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4393708Z return mod(**inputs) 2025-09-07T08:02:45.4394068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4394468Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4394897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4395265Z outputs = block( 2025-09-07T08:02:45.4395581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4395935Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4396301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4396666Z return func(*args, **kwargs) 2025-09-07T08:02:45.4397013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4397394Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4397771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4398136Z return func(*args, **kwargs) 2025-09-07T08:02:45.4398498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:02:45.4398872Z attn_output = self.c_proj(attn_output) 2025-09-07T08:02:45.4399225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4399621Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4399801Z 2025-09-07T08:02:45.4399918Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4400296Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4400646Z return mod(**inputs) 2025-09-07T08:02:45.4401054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4401498Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4401930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4402342Z outputs = block( 2025-09-07T08:02:45.4402701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4403109Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4403531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4403955Z return func(*args, **kwargs) 2025-09-07T08:02:45.4404355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-09-07T08:02:45.4404851Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-09-07T08:02:45.4405311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 367, in forward 2025-09-07T08:02:45.4405756Z hidden_states = self.c_proj(hidden_states) 2025-09-07T08:02:45.4406157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4406598Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4406870Z 2025-09-07T08:02:45.4406998Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4407409Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4407821Z return mod(**inputs) 2025-09-07T08:02:45.4408226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4408695Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4409126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4409554Z outputs = block( 2025-09-07T08:02:45.4409942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4410345Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4410799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4411221Z return func(*args, **kwargs) 2025-09-07T08:02:45.4411630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4412065Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4412545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4412963Z return func(*args, **kwargs) 2025-09-07T08:02:45.4413367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:02:45.4413924Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:02:45.4414431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4414872Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4415069Z 2025-09-07T08:02:45.4415163Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4415404Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4415645Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4415867Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4416139Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4416504Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4416832Z return mod(**inputs) 2025-09-07T08:02:45.4417191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4417595Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4417986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4418360Z outputs = block( 2025-09-07T08:02:45.4418690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4419051Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4419428Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4419795Z return func(*args, **kwargs) 2025-09-07T08:02:45.4420159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4420544Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4420960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4421359Z return func(*args, **kwargs) 2025-09-07T08:02:45.4421758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:02:45.4422201Z attn_output, attn_weights = attention_interface( 2025-09-07T08:02:45.4422674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:02:45.4423234Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:02:45.4423451Z 2025-09-07T08:02:45.4423561Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4423923Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4424244Z return mod(**inputs) 2025-09-07T08:02:45.4424647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4425078Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4425499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4425906Z outputs = block( 2025-09-07T08:02:45.4426227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4426599Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4426984Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4427368Z return func(*args, **kwargs) 2025-09-07T08:02:45.4427743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4428145Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4428532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4428899Z return func(*args, **kwargs) 2025-09-07T08:02:45.4429273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:02:45.4429686Z attn_output, attn_weights = attention_interface( 2025-09-07T08:02:45.4430169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:02:45.4430665Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:02:45.4430841Z 2025-09-07T08:02:45.4430967Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4431361Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4431717Z return mod(**inputs) 2025-09-07T08:02:45.4432090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4432513Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4432932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4433331Z outputs = block( 2025-09-07T08:02:45.4433672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4434068Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4434477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4434878Z return func(*args, **kwargs) 2025-09-07T08:02:45.4435266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4435692Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4436112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4436512Z return func(*args, **kwargs) 2025-09-07T08:02:45.4436905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:02:45.4437318Z attn_output = self.c_proj(attn_output) 2025-09-07T08:02:45.4437755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4438185Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4438368Z 2025-09-07T08:02:45.4438491Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4438881Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4439223Z return mod(**inputs) 2025-09-07T08:02:45.4439642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4440080Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4440480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4440861Z outputs = block( 2025-09-07T08:02:45.4441219Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4441618Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4442025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4442429Z return func(*args, **kwargs) 2025-09-07T08:02:45.4442816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4443240Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4443657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4444059Z return func(*args, **kwargs) 2025-09-07T08:02:45.4444446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:02:45.4444977Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:02:45.4445621Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4446059Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4446246Z 2025-09-07T08:02:45.4446346Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4446575Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4446872Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4447120Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4447383Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4447780Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4448145Z return mod(**inputs) 2025-09-07T08:02:45.4448423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4448515Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4448765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4448841Z outputs = block( 2025-09-07T08:02:45.4449069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4449161Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4449406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4449488Z return func(*args, **kwargs) 2025-09-07T08:02:45.4449735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4449828Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4450086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4450238Z return func(*args, **kwargs) 2025-09-07T08:02:45.4450507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:02:45.4450614Z attn_output, attn_weights = attention_interface( 2025-09-07T08:02:45.4450921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:02:45.4451122Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:02:45.4451127Z 2025-09-07T08:02:45.4451243Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4451467Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4451541Z return mod(**inputs) 2025-09-07T08:02:45.4451817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4451920Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4452167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4452243Z outputs = block( 2025-09-07T08:02:45.4452465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4452552Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4452797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4452868Z return func(*args, **kwargs) 2025-09-07T08:02:45.4453124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4453215Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4453464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4453538Z return func(*args, **kwargs) 2025-09-07T08:02:45.4453784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:02:45.4453893Z attn_output, attn_weights = attention_interface( 2025-09-07T08:02:45.4454184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:02:45.4454308Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:02:45.4454312Z 2025-09-07T08:02:45.4454421Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4454629Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4454697Z return mod(**inputs) 2025-09-07T08:02:45.4454949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4455043Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4455289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4455360Z outputs = block( 2025-09-07T08:02:45.4455584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4455665Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4455917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4455989Z return func(*args, **kwargs) 2025-09-07T08:02:45.4456243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:02:45.4456333Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:02:45.4456610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4456694Z return func(*args, **kwargs) 2025-09-07T08:02:45.4456940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:02:45.4457031Z attn_output = self.c_proj(attn_output) 2025-09-07T08:02:45.4457250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4457411Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4457415Z 2025-09-07T08:02:45.4457523Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4457725Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4457803Z return mod(**inputs) 2025-09-07T08:02:45.4458055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:02:45.4458151Z transformer_outputs = self.transformer( 2025-09-07T08:02:45.4458399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:02:45.4458464Z outputs = block( 2025-09-07T08:02:45.4458696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:02:45.4458778Z return super().__call__(*args, **kwargs) 2025-09-07T08:02:45.4459028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:02:45.4459100Z return func(*args, **kwargs) 2025-09-07T08:02:45.4459354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-09-07T08:02:45.4459476Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-09-07T08:02:45.4459740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 367, in forward 2025-09-07T08:02:45.4459843Z hidden_states = self.c_proj(hidden_states) 2025-09-07T08:02:45.4460074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:02:45.4460211Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:02:45.4460215Z 2025-09-07T08:02:45.4460302Z cudagraph partition due to non gpu ops 2025-09-07T08:02:45.4460418Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:02:45.4460653Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4460725Z return mod(**inputs) 2025-09-07T08:02:45.4460997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1537, in forward 2025-09-07T08:02:45.4461157Z loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1)) 2025-09-07T08:02:45.4461162Z 2025-09-07T08:02:45.4461272Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:02:45.4461490Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:02:45.4461562Z return mod(**inputs) 2025-09-07T08:02:45.4461836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1537, in forward 2025-09-07T08:02:45.4461991Z loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1)) 2025-09-07T08:02:45.4461995Z 2025-09-07T08:03:02.9679784Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9680195Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9686089Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9686459Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9686720Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9687441Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9687792Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9688033Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9688274Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9688506Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9688734Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9688955Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9689226Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:03:02.9689788Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9690178Z return mod(**inputs) 2025-09-07T08:03:02.9690624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1509, in forward 2025-09-07T08:03:02.9691133Z last_non_pad_token = (token_indices * non_pad_mask).argmax(-1) 2025-09-07T08:03:02.9691354Z 2025-09-07T08:03:02.9691481Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9691906Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9692274Z return mod(**inputs) 2025-09-07T08:03:02.9692666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9693121Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9693557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9693977Z outputs = block( 2025-09-07T08:03:02.9694351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9694758Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9695178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9695605Z return func(*args, **kwargs) 2025-09-07T08:03:02.9696012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9696447Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9696875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9697303Z return func(*args, **kwargs) 2025-09-07T08:03:02.9697708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:03:02.9698255Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:03:02.9698753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9699192Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9699392Z 2025-09-07T08:03:02.9699482Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9699715Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9699939Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9700170Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9700424Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9700822Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9701186Z return mod(**inputs) 2025-09-07T08:03:02.9701588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9702029Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9702456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9702869Z outputs = block( 2025-09-07T08:03:02.9703257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9703713Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9704150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9704581Z return func(*args, **kwargs) 2025-09-07T08:03:02.9705016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9705509Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9705943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9706345Z return func(*args, **kwargs) 2025-09-07T08:03:02.9706748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9707190Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9707679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:03:02.9708225Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:03:02.9708426Z 2025-09-07T08:03:02.9708552Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9708945Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9709310Z return mod(**inputs) 2025-09-07T08:03:02.9709713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9710144Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9710564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9710976Z outputs = block( 2025-09-07T08:03:02.9711367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9711782Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9712190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9712591Z return func(*args, **kwargs) 2025-09-07T08:03:02.9712986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9713413Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9713842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9714246Z return func(*args, **kwargs) 2025-09-07T08:03:02.9714645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9715071Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9715547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:03:02.9716039Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:03:02.9716216Z 2025-09-07T08:03:02.9716339Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9716736Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9717084Z return mod(**inputs) 2025-09-07T08:03:02.9717472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9717909Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9718327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9718774Z outputs = block( 2025-09-07T08:03:02.9719122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9719520Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9719926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9720326Z return func(*args, **kwargs) 2025-09-07T08:03:02.9720758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9721199Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9721628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9722042Z return func(*args, **kwargs) 2025-09-07T08:03:02.9722453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:03:02.9722889Z attn_output = self.c_proj(attn_output) 2025-09-07T08:03:02.9723294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9723740Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9723933Z 2025-09-07T08:03:02.9724061Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9724461Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9724826Z return mod(**inputs) 2025-09-07T08:03:02.9725229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9725673Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9726105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9726517Z outputs = block( 2025-09-07T08:03:02.9727190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9727610Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9728037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9728447Z return func(*args, **kwargs) 2025-09-07T08:03:02.9728866Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-09-07T08:03:02.9729327Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-09-07T08:03:02.9729789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 367, in forward 2025-09-07T08:03:02.9730229Z hidden_states = self.c_proj(hidden_states) 2025-09-07T08:03:02.9730631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9731079Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9731280Z 2025-09-07T08:03:02.9731397Z cudagraph partition due to non gpu ops. 
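The frames at modeling_gpt2.py:349 (the attention `c_proj`) and modeling_gpt2.py:367 (the MLP `c_proj`) both bottom out at transformers' `pytorch_utils.py:116`, where the GPT-2 `Conv1D` projection flattens the leading dimensions and calls `torch.addmm`; the `c_attn` records a little further down hit the same line. A rough sketch of that step, with assumed GPT-2-style sizes, is:

```python
import torch

# Assumed GPT-2-small sizes; the log does not record the actual shapes.
hidden = torch.randn(1, 128, 768)      # (batch, seq, n_embd)
weight = torch.randn(768, 3 * 768)     # Conv1D keeps its weight as (in_features, out_features)
bias = torch.zeros(3 * 768)

# pytorch_utils.py:116 in the traces -- flatten (batch, seq) and do one fused addmm.
x = torch.addmm(bias, hidden.view(-1, hidden.size(-1)), weight)
x = x.view(hidden.size(0), hidden.size(1), -1)   # back to (batch, seq, 3 * n_embd)
```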
Found from : 2025-09-07T08:03:02.9731802Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9732166Z return mod(**inputs) 2025-09-07T08:03:02.9732564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9733004Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9733435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9733850Z outputs = block( 2025-09-07T08:03:02.9734205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9734667Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9735096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9735515Z return func(*args, **kwargs) 2025-09-07T08:03:02.9735926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9736370Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9736834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9737247Z return func(*args, **kwargs) 2025-09-07T08:03:02.9737610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:03:02.9738112Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:03:02.9738583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9738991Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9739173Z 2025-09-07T08:03:02.9739268Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9739499Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9739732Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9739964Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9740223Z cudagraph partition due to non gpu ops. 
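Each record above closes with inductor's "cudagraph partition due to non gpu ops" note, which is emitted when part of the compiled region cannot be captured into a CUDA graph, so inductor splits the graph around it. A minimal, hedged sketch of the setting where this diagnostic appears (the `reduce-overhead` compile mode, which turns on CUDA graph capture) is below; it uses a toy function rather than the benchmark's GPT-2 model.

```python
import torch

# "reduce-overhead" asks inductor to capture the compiled region into CUDA graphs.
@torch.compile(mode="reduce-overhead")
def step(x):
    return torch.relu(x) * 2.0

# Only meaningful on a GPU; work that has to run outside the device stream makes
# inductor partition the captured graph, which is what these log lines report.
if torch.cuda.is_available():
    out = step(torch.randn(32, device="cuda"))
```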
Found from : 2025-09-07T08:03:02.9740609Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9740964Z return mod(**inputs) 2025-09-07T08:03:02.9741357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9741769Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9742169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9742541Z outputs = block( 2025-09-07T08:03:02.9742871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9743241Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9743627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9744169Z return func(*args, **kwargs) 2025-09-07T08:03:02.9744585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9744991Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9745592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9745985Z return func(*args, **kwargs) 2025-09-07T08:03:02.9746358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9746800Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9747254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:03:02.9747791Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:03:02.9747983Z 2025-09-07T08:03:02.9748100Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9748473Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9748814Z return mod(**inputs) 2025-09-07T08:03:02.9749179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9749665Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9750068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9750449Z outputs = block( 2025-09-07T08:03:02.9750790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9751182Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9751624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9751988Z return func(*args, **kwargs) 2025-09-07T08:03:02.9752353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9752744Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9753127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9753500Z return func(*args, **kwargs) 2025-09-07T08:03:02.9753859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9754257Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9754695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:03:02.9755181Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:03:02.9755342Z 2025-09-07T08:03:02.9755453Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9755806Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9756133Z return mod(**inputs) 2025-09-07T08:03:02.9756486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9756881Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9757259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9757627Z outputs = block( 2025-09-07T08:03:02.9757950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9758310Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9758687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9759049Z return func(*args, **kwargs) 2025-09-07T08:03:02.9759413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9759804Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9760183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9760554Z return func(*args, **kwargs) 2025-09-07T08:03:02.9760918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:03:02.9761322Z attn_output = self.c_proj(attn_output) 2025-09-07T08:03:02.9761673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9762076Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9762252Z 2025-09-07T08:03:02.9762360Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9762722Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9763051Z return mod(**inputs) 2025-09-07T08:03:02.9763415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9763883Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9764302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9764711Z outputs = block( 2025-09-07T08:03:02.9765067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9765464Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9765908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9766311Z return func(*args, **kwargs) 2025-09-07T08:03:02.9766707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-09-07T08:03:02.9767235Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-09-07T08:03:02.9767694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 367, in forward 2025-09-07T08:03:02.9768136Z hidden_states = self.c_proj(hidden_states) 2025-09-07T08:03:02.9768530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9768975Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9769163Z 2025-09-07T08:03:02.9769288Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9769686Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9770045Z return mod(**inputs) 2025-09-07T08:03:02.9770440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9770879Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9771298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9771696Z outputs = block( 2025-09-07T08:03:02.9772047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9772439Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9772848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9773247Z return func(*args, **kwargs) 2025-09-07T08:03:02.9773639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9774068Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9774527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9774934Z return func(*args, **kwargs) 2025-09-07T08:03:02.9775334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:03:02.9775920Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:03:02.9776420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9776831Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9777005Z 2025-09-07T08:03:02.9777099Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9777318Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9777537Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9777748Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9777995Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9778358Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9778757Z return mod(**inputs) 2025-09-07T08:03:02.9779143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9779547Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9779939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9780311Z outputs = block( 2025-09-07T08:03:02.9780671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9781048Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9781433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9781811Z return func(*args, **kwargs) 2025-09-07T08:03:02.9782177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9782584Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9782980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9783357Z return func(*args, **kwargs) 2025-09-07T08:03:02.9783724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9784139Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9784599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:03:02.9785090Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:03:02.9785274Z 2025-09-07T08:03:02.9785390Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9785752Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9786090Z return mod(**inputs) 2025-09-07T08:03:02.9786455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9786859Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9787254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9787624Z outputs = block( 2025-09-07T08:03:02.9787965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9788324Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9788699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9789074Z return func(*args, **kwargs) 2025-09-07T08:03:02.9789431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9789817Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9790188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9790553Z return func(*args, **kwargs) 2025-09-07T08:03:02.9790912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9791317Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9791762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:03:02.9792218Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:03:02.9792381Z 2025-09-07T08:03:02.9792492Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9792852Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9793225Z return mod(**inputs) 2025-09-07T08:03:02.9793582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9793978Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9794367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9794742Z outputs = block( 2025-09-07T08:03:02.9795086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9795440Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9795805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9796156Z return func(*args, **kwargs) 2025-09-07T08:03:02.9796513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9796899Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9797268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9797625Z return func(*args, **kwargs) 2025-09-07T08:03:02.9797970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:03:02.9798346Z attn_output = self.c_proj(attn_output) 2025-09-07T08:03:02.9798697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9799093Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9799262Z 2025-09-07T08:03:02.9799369Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9799737Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9800077Z return mod(**inputs) 2025-09-07T08:03:02.9800447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9800850Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9801236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9801613Z outputs = block( 2025-09-07T08:03:02.9801947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9802318Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9802697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9803080Z return func(*args, **kwargs) 2025-09-07T08:03:02.9803456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9803864Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9804259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9804653Z return func(*args, **kwargs) 2025-09-07T08:03:02.9805050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:03:02.9805551Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:03:02.9806023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9806426Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9806604Z 2025-09-07T08:03:02.9806696Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9807100Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9807347Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9807585Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9807842Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9808254Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9808591Z return mod(**inputs) 2025-09-07T08:03:02.9809006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9809412Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9809809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9810197Z outputs = block( 2025-09-07T08:03:02.9810530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9810905Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9811284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9811663Z return func(*args, **kwargs) 2025-09-07T08:03:02.9812039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9812440Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9812838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9813210Z return func(*args, **kwargs) 2025-09-07T08:03:02.9813582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9813991Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9814446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:03:02.9814930Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:03:02.9815106Z 2025-09-07T08:03:02.9815208Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9815566Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9815886Z return mod(**inputs) 2025-09-07T08:03:02.9816250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9816625Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9816998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9817365Z outputs = block( 2025-09-07T08:03:02.9817690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9818056Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9818417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9818777Z return func(*args, **kwargs) 2025-09-07T08:03:02.9819136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9819518Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9819887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9820248Z return func(*args, **kwargs) 2025-09-07T08:03:02.9820611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9821011Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9821534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:03:02.9821977Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:03:02.9822144Z 2025-09-07T08:03:02.9822250Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9822616Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9822944Z return mod(**inputs) 2025-09-07T08:03:02.9823331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9823710Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9824089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9824448Z outputs = block( 2025-09-07T08:03:02.9824763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9825107Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9825472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9825830Z return func(*args, **kwargs) 2025-09-07T08:03:02.9826186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9826566Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9826937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9827295Z return func(*args, **kwargs) 2025-09-07T08:03:02.9827648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:03:02.9828026Z attn_output = self.c_proj(attn_output) 2025-09-07T08:03:02.9828377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9828759Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9828937Z 2025-09-07T08:03:02.9829041Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9829395Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9829711Z return mod(**inputs) 2025-09-07T08:03:02.9830057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9830439Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9830812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9831169Z outputs = block( 2025-09-07T08:03:02.9831480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9831823Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9832190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9832549Z return func(*args, **kwargs) 2025-09-07T08:03:02.9832902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-09-07T08:03:02.9833302Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-09-07T08:03:02.9833686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 367, in forward 2025-09-07T08:03:02.9834069Z hidden_states = self.c_proj(hidden_states) 2025-09-07T08:03:02.9834426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9834834Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9835017Z 2025-09-07T08:03:02.9835119Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9835473Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9835791Z return mod(**inputs) 2025-09-07T08:03:02.9836145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9836562Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9836982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9837333Z outputs = block( 2025-09-07T08:03:02.9837637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9837977Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9838331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9838683Z return func(*args, **kwargs) 2025-09-07T08:03:02.9839038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9839404Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9839795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9840151Z return func(*args, **kwargs) 2025-09-07T08:03:02.9840508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:03:02.9840982Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:03:02.9841429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9841816Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9841979Z 2025-09-07T08:03:02.9842060Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9842273Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9842478Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9842683Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9842903Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9843255Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9843579Z return mod(**inputs) 2025-09-07T08:03:02.9843941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9844335Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9844712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9845227Z outputs = block( 2025-09-07T08:03:02.9845553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9845922Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9846302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9846685Z return func(*args, **kwargs) 2025-09-07T08:03:02.9847123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9847552Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9847977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9848373Z return func(*args, **kwargs) 2025-09-07T08:03:02.9848797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9849220Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9849666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:03:02.9850141Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:03:02.9850321Z 2025-09-07T08:03:02.9850426Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9850847Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9851171Z return mod(**inputs) 2025-09-07T08:03:02.9851534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9851908Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9852285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9852646Z outputs = block( 2025-09-07T08:03:02.9852965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9853362Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9853719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9854082Z return func(*args, **kwargs) 2025-09-07T08:03:02.9854440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9854824Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9855192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9855548Z return func(*args, **kwargs) 2025-09-07T08:03:02.9855902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9856289Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9856716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:03:02.9857149Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:03:02.9857311Z 2025-09-07T08:03:02.9857415Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9857764Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9858083Z return mod(**inputs) 2025-09-07T08:03:02.9858427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9858798Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9859176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9859532Z outputs = block( 2025-09-07T08:03:02.9859848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9860199Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9860575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9860953Z return func(*args, **kwargs) 2025-09-07T08:03:02.9861309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9861691Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9862055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9862452Z return func(*args, **kwargs) 2025-09-07T08:03:02.9862816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:03:02.9863201Z attn_output = self.c_proj(attn_output) 2025-09-07T08:03:02.9863560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9863957Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9864134Z 2025-09-07T08:03:02.9864269Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9864622Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9864938Z return mod(**inputs) 2025-09-07T08:03:02.9865277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9865658Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9866037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9866399Z outputs = block( 2025-09-07T08:03:02.9866712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9867053Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9867419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9867781Z return func(*args, **kwargs) 2025-09-07T08:03:02.9868138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9868518Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9868888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9869262Z return func(*args, **kwargs) 2025-09-07T08:03:02.9869631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:03:02.9870122Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:03:02.9870577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9870972Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9871164Z 2025-09-07T08:03:02.9871244Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9871459Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9871665Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9871870Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9872105Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9872470Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9872799Z return mod(**inputs) 2025-09-07T08:03:02.9873164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9873559Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9873944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9874315Z outputs = block( 2025-09-07T08:03:02.9874639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9874993Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9875373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9875741Z return func(*args, **kwargs) 2025-09-07T08:03:02.9876105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9876529Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9876913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9877280Z return func(*args, **kwargs) 2025-09-07T08:03:02.9877643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9878075Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9878505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:03:02.9878977Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:03:02.9879164Z 2025-09-07T08:03:02.9879269Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9879628Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9879953Z return mod(**inputs) 2025-09-07T08:03:02.9880305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9880698Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9881093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9881475Z outputs = block( 2025-09-07T08:03:02.9881787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9882142Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9882522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9882902Z return func(*args, **kwargs) 2025-09-07T08:03:02.9883278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9883671Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9884062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9884437Z return func(*args, **kwargs) 2025-09-07T08:03:02.9884815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9885223Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9885663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:03:02.9886130Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:03:02.9886302Z 2025-09-07T08:03:02.9886410Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9886779Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9887193Z return mod(**inputs) 2025-09-07T08:03:02.9887601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9888050Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9888450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9888841Z outputs = block( 2025-09-07T08:03:02.9889159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9889532Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9889919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9890326Z return func(*args, **kwargs) 2025-09-07T08:03:02.9890711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9891114Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9891509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9891885Z return func(*args, **kwargs) 2025-09-07T08:03:02.9892304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:03:02.9892692Z attn_output = self.c_proj(attn_output) 2025-09-07T08:03:02.9893056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9893461Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9893635Z 2025-09-07T08:03:02.9893752Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9894133Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9894461Z return mod(**inputs) 2025-09-07T08:03:02.9894829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9895232Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9895649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9896043Z outputs = block( 2025-09-07T08:03:02.9896393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9896784Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9897168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9897548Z return func(*args, **kwargs) 2025-09-07T08:03:02.9897919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-09-07T08:03:02.9898334Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-09-07T08:03:02.9898748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 367, in forward 2025-09-07T08:03:02.9899149Z hidden_states = self.c_proj(hidden_states) 2025-09-07T08:03:02.9899518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9899944Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9900134Z 2025-09-07T08:03:02.9900258Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9900627Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9900962Z return mod(**inputs) 2025-09-07T08:03:02.9901320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9901720Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9902114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9902489Z outputs = block( 2025-09-07T08:03:02.9902822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9903182Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9903565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9903946Z return func(*args, **kwargs) 2025-09-07T08:03:02.9904319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9904772Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9905161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9905541Z return func(*args, **kwargs) 2025-09-07T08:03:02.9905916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:03:02.9906420Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:03:02.9906938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9907346Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9907526Z 2025-09-07T08:03:02.9907622Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9907843Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9908052Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9908255Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9908489Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9908849Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9909173Z return mod(**inputs) 2025-09-07T08:03:02.9909524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9909921Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9910307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9910676Z outputs = block( 2025-09-07T08:03:02.9911008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9911383Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9911761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9912133Z return func(*args, **kwargs) 2025-09-07T08:03:02.9912497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9912880Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9913262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9913632Z return func(*args, **kwargs) 2025-09-07T08:03:02.9913995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9914393Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9914823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:03:02.9915298Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:03:02.9915484Z 2025-09-07T08:03:02.9915588Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9915943Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9916267Z return mod(**inputs) 2025-09-07T08:03:02.9916622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9917012Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9917391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9917757Z outputs = block( 2025-09-07T08:03:02.9918070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9918451Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9918844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9919220Z return func(*args, **kwargs) 2025-09-07T08:03:02.9919581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9919969Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9920387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9920756Z return func(*args, **kwargs) 2025-09-07T08:03:02.9921119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 336, in forward 2025-09-07T08:03:02.9921516Z attn_output, attn_weights = attention_interface( 2025-09-07T08:03:02.9921948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:03:02.9922401Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:03:02.9922569Z 2025-09-07T08:03:02.9922674Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9923030Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9923352Z return mod(**inputs) 2025-09-07T08:03:02.9923722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9924121Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9924514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9924891Z outputs = block( 2025-09-07T08:03:02.9925212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9925587Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9925990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9926390Z return func(*args, **kwargs) 2025-09-07T08:03:02.9926775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9927301Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9927725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9928135Z return func(*args, **kwargs) 2025-09-07T08:03:02.9928514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 349, in forward 2025-09-07T08:03:02.9928902Z attn_output = self.c_proj(attn_output) 2025-09-07T08:03:02.9929270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9929682Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9929859Z 2025-09-07T08:03:02.9929979Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:02.9930354Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9930685Z return mod(**inputs) 2025-09-07T08:03:02.9931054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9931453Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9931846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9932214Z outputs = block( 2025-09-07T08:03:02.9932541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9932947Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9933334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9933713Z return func(*args, **kwargs) 2025-09-07T08:03:02.9934078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 404, in forward 2025-09-07T08:03:02.9934480Z attn_output, self_attn_weights = self.attn( 2025-09-07T08:03:02.9934901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9935283Z return func(*args, **kwargs) 2025-09-07T08:03:02.9935652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 294, in forward 2025-09-07T08:03:02.9936144Z query_states, key_states, value_states = self.c_attn(hidden_states).split(self.split_size, dim=2) 2025-09-07T08:03:02.9936617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9937023Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9937198Z 2025-09-07T08:03:02.9937303Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9937514Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9937731Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9937940Z cudagraph partition due to non gpu ops 2025-09-07T08:03:02.9938181Z cudagraph partition due to non gpu ops. 
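Note on the repeated "cudagraph partition due to non gpu ops" lines above: Inductor reports, for each flagged call site (the scaled_dot_product_attention call, the post-attention .contiguous(), and the Conv1D addmm), that the op cannot be placed in a CUDA graph partition; on this CPU-only shard none of the traced ops run on a GPU, so the report fires at every kernel site. Below is a minimal sketch of driving a similar compile path outside the benchmark harness; the model id, shapes, label count, and the freezing flag are illustrative assumptions, not values taken from this log, and AMP autocast (also part of the job name) is omitted for brevity.

import torch
from torch._inductor import config as inductor_config
from transformers import AutoModelForSequenceClassification, AutoTokenizer

inductor_config.freezing = True  # assumed stand-in for the job's "freezing" setting

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2).eval()
model.config.pad_token_id = tok.eos_token_id  # GPT-2 has no pad token by default

# max-autotune + dynamic shapes, roughly mirroring this shard's configuration
compiled = torch.compile(model, mode="max-autotune", dynamic=True)

enc = tok(["an illustrative input sentence"], return_tensors="pt")
with torch.no_grad():
    out = compiled(**enc, labels=torch.tensor([0]))
print(out.loss, out.logits.shape)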
Found from : 2025-09-07T08:03:02.9960367Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:02.9960725Z return mod(**inputs) 2025-09-07T08:03:02.9961118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:02.9961584Z transformer_outputs = self.transformer( 2025-09-07T08:03:02.9961993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:02.9962394Z outputs = block( 2025-09-07T08:03:02.9962742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:02.9963135Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:02.9963558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:02.9963940Z return func(*args, **kwargs) 2025-09-07T08:03:02.9964312Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-09-07T08:03:02.9964756Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-09-07T08:03:02.9965192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 367, in forward 2025-09-07T08:03:02.9980575Z hidden_states = self.c_proj(hidden_states) 2025-09-07T08:03:02.9981040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:02.9981527Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:02.9981718Z 2025-09-07T08:03:02.9981833Z cudagraph partition due to non gpu ops. 
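The torch.addmm frames at the bottom of the c_attn, c_proj, and mlp tracebacks come from transformers' Conv1D layer, the GPT-2 style linear whose weight is stored as (in_features, out_features), i.e. the transpose of nn.Linear's layout. A small sketch of the equivalent computation, with illustrative shapes:

import torch

def conv1d_forward(x: torch.Tensor, weight: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    # flatten all leading dims, do one addmm, then restore the original shape,
    # matching the `torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)` frame
    size_out = x.size()[:-1] + (weight.size(1),)
    y = torch.addmm(bias, x.view(-1, x.size(-1)), weight)
    return y.view(size_out)

x = torch.randn(2, 5, 768)        # (batch, seq, hidden), illustrative
w = torch.randn(768, 3 * 768)     # c_attn packs q/k/v, hence 3 * hidden
b = torch.zeros(3 * 768)
qkv = conv1d_forward(x, w, b)
q, k, v = qkv.split(768, dim=2)   # mirrors the `.split(self.split_size, dim=2)` frame
print(q.shape, k.shape, v.shape)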
Found from : 2025-09-07T08:03:03.0102525Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:03.0102603Z return mod(**inputs) 2025-09-07T08:03:03.0102853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1480, in forward 2025-09-07T08:03:03.0102946Z transformer_outputs = self.transformer( 2025-09-07T08:03:03.0103192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward 2025-09-07T08:03:03.0103260Z outputs = block( 2025-09-07T08:03:03.0103489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:03.0103572Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:03.0103817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:03:03.0103890Z return func(*args, **kwargs) 2025-09-07T08:03:03.0104140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 440, in forward 2025-09-07T08:03:03.0104262Z feed_forward_hidden_states = self.mlp(hidden_states) 2025-09-07T08:03:03.0104500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 367, in forward 2025-09-07T08:03:03.0104594Z hidden_states = self.c_proj(hidden_states) 2025-09-07T08:03:03.0104811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 116, in forward 2025-09-07T08:03:03.0104938Z x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight) 2025-09-07T08:03:03.0104941Z 2025-09-07T08:03:03.0105025Z cudagraph partition due to non gpu ops 2025-09-07T08:03:03.0105131Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:03:03.0105340Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:03.0105411Z return mod(**inputs) 2025-09-07T08:03:03.0105671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1537, in forward 2025-09-07T08:03:03.0105823Z loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1)) 2025-09-07T08:03:03.0105828Z 2025-09-07T08:03:03.0105934Z cudagraph partition due to non gpu ops. 
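The last "Found from" frames point at GPT2ForSequenceClassification's loss computation, which flattens the logits taken at each sequence's last non-pad position and applies cross-entropy. A hedged sketch of that step; the batch size and label count below are illustrative:

import torch

batch, num_labels = 4, 2
pooled_logits = torch.randn(batch, num_labels)     # logits pooled at the last non-pad token
labels = torch.randint(0, num_labels, (batch,))

loss_fct = torch.nn.CrossEntropyLoss()
loss = loss_fct(pooled_logits.view(-1, num_labels), labels.view(-1))
print(loss)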
Found from : 2025-09-07T08:03:03.0106145Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:03.0106215Z return mod(**inputs) 2025-09-07T08:03:03.0106479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1537, in forward 2025-09-07T08:03:03.0106624Z loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1)) 2025-09-07T08:03:03.0106627Z 2025-09-07T08:03:08.8846648Z Compilation time (from dynamo_timed): 48.006471452 2025-09-07T08:03:08.8847398Z pass 2025-09-07T08:03:08.8847774Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:03:08.8848823Z TIMING: _recursive_pre_grad_passes:0.08753 _recursive_joint_graph_passes:0.52119 _recursive_post_grad_passes:0.15036 linear_unary_template_precompiling:4.18835 linear_unary_template_autotuning:0.92712 async_compile.wait:0.85588 code_gen:12.86545 inductor_compile:35.94014 backend_compile:43.91745 gc:0.00147 entire_frame_compile:48.00647 total_wall_time:48.00647 2025-09-07T08:03:08.8850087Z STATS: call_* op count: 1142 | FakeTensorMode.__torch_dispatch__:45819 | FakeTensor.__torch_dispatch__:7301 | ProxyTorchDispatchMode.__torch_dispatch__:10247 2025-09-07T08:03:08.8850622Z Dynamo produced 2 graphs covering 1142 ops with 0 graph breaks (0 unique) 2025-09-07T08:03:11.9442930Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T08:03:11.9443867Z import pynvml # type: ignore[import] 2025-09-07T08:03:14.6102330Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-09-07T08:03:14.6103379Z from pkg_resources import resource_filename 2025-09-07T08:03:15.2553916Z 2025-09-07T08:03:16.1351076Z loading model: 0it [00:00, ?it/s]WARNING:common:Model GoogleFnet supports float32 only 2025-09-07T08:03:16.2752446Z 2025-09-07T08:03:16.2756699Z loading model: 0it [00:01, ?it/s] 2025-09-07T08:03:16.2757022Z WARNING:common:Model GoogleFnet supports float32 only 2025-09-07T08:03:16.2764066Z cpu eval GoogleFnet 2025-09-07T08:03:16.6911986Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:03:16.6912572Z WARNING:common:Model GoogleFnet supports float32 only 2025-09-07T08:03:16.8570944Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:03:17.0246449Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:03:32.0990622Z Autotune Choices Stats: 2025-09-07T08:03:32.0991249Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_mkl_linear", "best_time": 0.31315499995798746} 2025-09-07T08:03:32.1001755Z AUTOTUNE packed_linear(512x768, 2900193x1, 768x768) 2025-09-07T08:03:32.1002155Z strides: [768, 1], [1, 0], [768, 1] 2025-09-07T08:03:32.1002490Z dtypes: torch.float32, torch.float32, torch.float32 2025-09-07T08:03:32.1002839Z _mkl_linear 0.3132 ms 100.0% 2025-09-07T08:03:32.1003140Z cpp_CppMicroGemmFP32Vec_0 0.5995 ms 52.2% 2025-09-07T08:03:32.1003749Z SingleProcess AUTOTUNE benchmarking takes 0.2532 seconds and 1.8634 seconds precompiling for 2 choices 2025-09-07T08:03:34.2845245Z Autotune Choices Stats: 2025-09-07T08:03:34.2845715Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_mkl_linear", "best_time": 1.1213259999749425} 2025-09-07T08:03:34.2861987Z AUTOTUNE packed_linear(512x768, 5259489x1, 3072x768) 2025-09-07T08:03:34.2862619Z strides: [768, 1], [1, 0], [768, 1] 2025-09-07T08:03:34.2863169Z dtypes: torch.float32, torch.float32, torch.float32 2025-09-07T08:03:34.2863690Z _mkl_linear 1.1213 ms 100.0% 2025-09-07T08:03:34.2864235Z cpp_CppMicroGemmFP32Vec_1 2.3104 ms 48.5% 2025-09-07T08:03:34.2865174Z SingleProcess AUTOTUNE benchmarking takes 0.2618 seconds and 1.8404 seconds precompiling for 2 choices 2025-09-07T08:03:36.4072524Z Autotune Choices Stats: 2025-09-07T08:03:36.4073199Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_mkl_linear", "best_time": 1.1570230003599136} 2025-09-07T08:03:36.4083556Z AUTOTUNE packed_linear(512x3072, 5259489x1, 768x3072) 2025-09-07T08:03:36.4084044Z strides: [3072, 1], [1, 0], [3072, 1] 2025-09-07T08:03:36.4084397Z dtypes: torch.float32, torch.float32, torch.float32 2025-09-07T08:03:36.4084773Z _mkl_linear 1.1570 ms 100.0% 2025-09-07T08:03:36.4085089Z cpp_CppMicroGemmFP32Vec_2 2.3742 ms 48.7% 2025-09-07T08:03:36.4085723Z SingleProcess AUTOTUNE benchmarking takes 0.2637 seconds and 1.7906 seconds precompiling for 2 choices 2025-09-07T08:03:40.0534746Z Autotune Choices Stats: 2025-09-07T08:03:40.0535864Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_mkl_linear", "best_time": 15.989682000054017} 2025-09-07T08:03:40.0546568Z AUTOTUNE packed_linear(512x768, 38879457x1, 32000x768) 2025-09-07T08:03:40.0547032Z strides: [768, 1], [1, 0], [768, 1] 2025-09-07T08:03:40.0547435Z dtypes: torch.float32, torch.float32, torch.float32 2025-09-07T08:03:40.0547862Z _mkl_linear 15.9897 ms 100.0% 2025-09-07T08:03:40.0548742Z cpp_CppMicroGemmFP32Vec_26 25.2118 ms 63.4% 2025-09-07T08:03:40.0549337Z SingleProcess AUTOTUNE 
benchmarking takes 0.3769 seconds and 1.6308 seconds precompiling for 2 choices 2025-09-07T08:03:40.4086888Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:03:40.4087472Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4087858Z return mod(**inputs) 2025-09-07T08:03:40.4088331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4088806Z outputs = self.fnet( 2025-09-07T08:03:40.4089204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4089635Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4090058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4090542Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4090968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4091371Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4091809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4092258Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4092704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4093133Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4093593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4094048Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4094227Z 2025-09-07T08:03:40.4094355Z cudagraph partition due to non gpu ops. 
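For GoogleFnet the partition reports land in the Fourier mixing layer, whose output is the real part of an FFT over the hidden states. A sketch of that operation follows; which dimensions the Hugging Face layer transforms is an assumption here, and the shapes are illustrative:

import torch

hidden_states = torch.randn(2, 512, 768)  # (batch, seq, hidden), illustrative
# mix across the sequence and hidden dimensions, then keep the real part,
# mirroring `self.fourier_transform(hidden_states).real` in the traceback
mixed = torch.fft.fftn(hidden_states, dim=(1, 2)).real
print(mixed.shape)  # same shape as the input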
2025-09-07T08:03:40.4167470Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:03:40.4167861Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:03:40.4168222Z     return mod(**inputs)
2025-09-07T08:03:40.4168583Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward
2025-09-07T08:03:40.4168957Z     outputs = self.fnet(
2025-09-07T08:03:40.4169321Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 512, in forward
2025-09-07T08:03:40.4169726Z     embedding_output = self.embeddings(
2025-09-07T08:03:40.4170130Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 142, in forward
2025-09-07T08:03:40.4170553Z     embeddings = self.projection(embeddings)
2025-09-07T08:03:40.4170733Z
2025-09-07T08:03:40.4197228Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:03:40.4197599Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:03:40.4197935Z     return mod(**inputs)
2025-09-07T08:03:40.4198313Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward
2025-09-07T08:03:40.4198679Z     outputs = self.fnet(
2025-09-07T08:03:40.4199033Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward
2025-09-07T08:03:40.4199544Z     encoder_outputs = self.encoder(
2025-09-07T08:03:40.4199928Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward
2025-09-07T08:03:40.4200334Z     layer_outputs = layer_module(hidden_states)
2025-09-07T08:03:40.4200713Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:03:40.4201083Z     return super().__call__(*args, **kwargs)
2025-09-07T08:03:40.4201512Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward
2025-09-07T08:03:40.4201920Z     layer_output = apply_chunking_to_forward(
2025-09-07T08:03:40.4202352Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-09-07T08:03:40.4202803Z     return forward_fn(*input_tensors)
2025-09-07T08:03:40.4203247Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk
2025-09-07T08:03:40.4203748Z     intermediate_output = self.intermediate(fourier_output)
2025-09-07T08:03:40.4204211Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward
2025-09-07T08:03:40.4204663Z     hidden_states = self.intermediate_act_fn(hidden_states)
2025-09-07T08:03:40.4205080Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-09-07T08:03:40.4205573Z     return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-09-07T08:03:40.4205830Z
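Editor's note: the last frame in the stanza above is the tanh approximation of GELU from transformers' activations.py. A quick standalone check (illustrative only, not part of the benchmark script) that the formula printed in the log matches PyTorch's built-in approximate GELU:

    import math
    import torch

    x = torch.randn(8)
    # Formula copied from the traceback above (activations.py line 47).
    manual = 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))
    builtin = torch.nn.functional.gelu(x, approximate="tanh")
    print(torch.allclose(manual, builtin, atol=1e-6))  # expected: True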
Found from : 2025-09-07T08:03:40.4206906Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4207283Z return mod(**inputs) 2025-09-07T08:03:40.4207682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4208102Z outputs = self.fnet( 2025-09-07T08:03:40.4208534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4208923Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4209313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4209719Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4210088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4210460Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4210854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4211265Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4211669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4212064Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4212456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4212873Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4213032Z 2025-09-07T08:03:40.4213145Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4213505Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4213834Z return mod(**inputs) 2025-09-07T08:03:40.4214233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4214633Z outputs = self.fnet( 2025-09-07T08:03:40.4214993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4215375Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4215747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4216169Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4216534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4216888Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4217269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4217678Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4218089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4218483Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4218870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4219285Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4219462Z 2025-09-07T08:03:40.4219570Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4219926Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4220248Z return mod(**inputs) 2025-09-07T08:03:40.4220588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4220958Z outputs = self.fnet( 2025-09-07T08:03:40.4221311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4221696Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4222083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4222514Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4222906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4223295Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4223670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4224063Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4224457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4224840Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4225223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4225637Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4225795Z 2025-09-07T08:03:40.4225903Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4226277Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4226632Z return mod(**inputs) 2025-09-07T08:03:40.4227005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4227378Z outputs = self.fnet( 2025-09-07T08:03:40.4227756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4228194Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4228591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4228989Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4229359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4229713Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4230118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4230568Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4230963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4231340Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4231721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4232129Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4232286Z 2025-09-07T08:03:40.4232375Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4232607Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4232959Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4233276Z return mod(**inputs) 2025-09-07T08:03:40.4233632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4234003Z outputs = self.fnet( 2025-09-07T08:03:40.4234360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4234747Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4235132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4235539Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4235912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4236269Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4236647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-09-07T08:03:40.4237041Z layer_output = apply_chunking_to_forward( 2025-09-07T08:03:40.4237445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-09-07T08:03:40.4237833Z return forward_fn(*input_tensors) 2025-09-07T08:03:40.4238237Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-09-07T08:03:40.4238685Z intermediate_output = self.intermediate(fourier_output) 2025-09-07T08:03:40.4239102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-09-07T08:03:40.4239515Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-09-07T08:03:40.4239888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-09-07T08:03:40.4240348Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-09-07T08:03:40.4240598Z 2025-09-07T08:03:40.4240687Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4240927Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4241179Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4241566Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4241917Z return mod(**inputs) 2025-09-07T08:03:40.4242326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4242747Z outputs = self.fnet( 2025-09-07T08:03:40.4243123Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4243542Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4243951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4244414Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4244809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4245372Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4245799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4246248Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4246710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4247202Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4247634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4248091Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4248266Z 2025-09-07T08:03:40.4248394Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4248792Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4249148Z return mod(**inputs) 2025-09-07T08:03:40.4249533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4249944Z outputs = self.fnet( 2025-09-07T08:03:40.4250327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4250741Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4251139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4251568Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4251970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4252362Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4252765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4253202Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4253633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4254055Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4254470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4254897Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4255075Z 2025-09-07T08:03:40.4255189Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4255577Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4255933Z return mod(**inputs) 2025-09-07T08:03:40.4256313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4256709Z outputs = self.fnet( 2025-09-07T08:03:40.4257098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4257577Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4257988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4258386Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4258763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4259134Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4259582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4259996Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4260394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4260791Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4261177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4261592Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4261749Z 2025-09-07T08:03:40.4261861Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4262220Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4262549Z return mod(**inputs) 2025-09-07T08:03:40.4262913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4263294Z outputs = self.fnet( 2025-09-07T08:03:40.4263640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4264029Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4264408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4264811Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4265182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4265538Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4265922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4266332Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4266737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4267129Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4267508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4267917Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4268078Z 2025-09-07T08:03:40.4268169Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4268417Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4268775Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4269105Z return mod(**inputs) 2025-09-07T08:03:40.4269465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4269871Z outputs = self.fnet( 2025-09-07T08:03:40.4270253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4270644Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4271029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4271433Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4271830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4272210Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4272597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-09-07T08:03:40.4272995Z layer_output = apply_chunking_to_forward( 2025-09-07T08:03:40.4273407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-09-07T08:03:40.4273838Z return forward_fn(*input_tensors) 2025-09-07T08:03:40.4274252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-09-07T08:03:40.4274713Z intermediate_output = self.intermediate(fourier_output) 2025-09-07T08:03:40.4275137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-09-07T08:03:40.4275563Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-09-07T08:03:40.4275961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-09-07T08:03:40.4276436Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-09-07T08:03:40.4276688Z 2025-09-07T08:03:40.4276777Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4277010Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4277267Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4277624Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4277956Z return mod(**inputs) 2025-09-07T08:03:40.4278318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4278701Z outputs = self.fnet( 2025-09-07T08:03:40.4279063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4279444Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4279825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4280229Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4280603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4280972Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4281357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4281795Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4282233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4282659Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4283061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4283503Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4283676Z 2025-09-07T08:03:40.4283788Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4284175Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4284523Z return mod(**inputs) 2025-09-07T08:03:40.4284900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4285312Z outputs = self.fnet( 2025-09-07T08:03:40.4285694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4286156Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4286560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4287079Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4287489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4287901Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4288375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4288813Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4289244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4289666Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4290086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4290540Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4290709Z 2025-09-07T08:03:40.4290822Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4291216Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4291563Z return mod(**inputs) 2025-09-07T08:03:40.4291944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4292354Z outputs = self.fnet( 2025-09-07T08:03:40.4292723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4293133Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4293534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4293967Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4294354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4294748Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4295164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4295652Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4296056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4296442Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4296831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4297242Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4297402Z 2025-09-07T08:03:40.4297517Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4297880Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4298201Z return mod(**inputs) 2025-09-07T08:03:40.4298560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4298942Z outputs = self.fnet( 2025-09-07T08:03:40.4299303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4299686Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4300063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4300459Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4300831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4301242Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4301624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4302035Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4302445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4302957Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4303339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4303753Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4303918Z 2025-09-07T08:03:40.4304003Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4304250Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-07T08:03:40.4304619Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:03:40.4304945Z return mod(**inputs)
2025-09-07T08:03:40.4305305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward
2025-09-07T08:03:40.4305687Z outputs = self.fnet(
2025-09-07T08:03:40.4306042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward
2025-09-07T08:03:40.4306432Z encoder_outputs = self.encoder(
2025-09-07T08:03:40.4306804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward
2025-09-07T08:03:40.4307202Z layer_outputs = layer_module(hidden_states)
2025-09-07T08:03:40.4307575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:03:40.4307941Z return super().__call__(*args, **kwargs)
2025-09-07T08:03:40.4308326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward
2025-09-07T08:03:40.4308725Z layer_output = apply_chunking_to_forward(
2025-09-07T08:03:40.4309132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-09-07T08:03:40.4309533Z return forward_fn(*input_tensors)
2025-09-07T08:03:40.4309952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk
2025-09-07T08:03:40.4310398Z intermediate_output = self.intermediate(fourier_output)
2025-09-07T08:03:40.4310819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward
2025-09-07T08:03:40.4311240Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-09-07T08:03:40.4311625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-09-07T08:03:40.4312081Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-09-07T08:03:40.4312312Z 
2025-09-07T08:03:40.4312397Z cudagraph partition due to non gpu ops
2025-09-07T08:03:40.4312618Z cudagraph partition due to non gpu ops
2025-09-07T08:03:40.4312863Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:03:40.4313230Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4313558Z return mod(**inputs) 2025-09-07T08:03:40.4313920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4314299Z outputs = self.fnet( 2025-09-07T08:03:40.4314659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4315094Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4315458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4315847Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4316220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4316580Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4316990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4317399Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4317798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4318190Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4318571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4318973Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4319140Z 2025-09-07T08:03:40.4319246Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4319612Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4319946Z return mod(**inputs) 2025-09-07T08:03:40.4320310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4320685Z outputs = self.fnet( 2025-09-07T08:03:40.4321067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4321477Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4321857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4322254Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4322629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4323002Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4323393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4323805Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4324230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4324649Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4325062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4325519Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4325692Z 2025-09-07T08:03:40.4325812Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4326193Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4326550Z return mod(**inputs) 2025-09-07T08:03:40.4327013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4327456Z outputs = self.fnet( 2025-09-07T08:03:40.4327852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4328286Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4328693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4329108Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4329523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4329972Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4330399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4330858Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4331301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4331773Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4332191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4332654Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4332838Z 2025-09-07T08:03:40.4332956Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4333357Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4333713Z return mod(**inputs) 2025-09-07T08:03:40.4334104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4334519Z outputs = self.fnet( 2025-09-07T08:03:40.4334914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4335331Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4335737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4336169Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4336571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4336970Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4337390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4337830Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4338269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4338694Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4339093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4339511Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4339673Z 2025-09-07T08:03:40.4339756Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4339997Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4340352Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4340674Z return mod(**inputs) 2025-09-07T08:03:40.4341025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4341402Z outputs = self.fnet( 2025-09-07T08:03:40.4341755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4342142Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4342512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4342908Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4343285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4343645Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4344020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-09-07T08:03:40.4344442Z layer_output = apply_chunking_to_forward( 2025-09-07T08:03:40.4344845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-09-07T08:03:40.4345395Z return forward_fn(*input_tensors) 2025-09-07T08:03:40.4345824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-09-07T08:03:40.4346298Z intermediate_output = self.intermediate(fourier_output) 2025-09-07T08:03:40.4346782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-09-07T08:03:40.4347194Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-09-07T08:03:40.4347570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-09-07T08:03:40.4348016Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-09-07T08:03:40.4348250Z 2025-09-07T08:03:40.4348337Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4348546Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4348788Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4349149Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4349474Z return mod(**inputs) 2025-09-07T08:03:40.4349823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4350198Z outputs = self.fnet( 2025-09-07T08:03:40.4350546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4350923Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4351297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4351691Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4352062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4352426Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4352816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4353206Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4353604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4353982Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4354360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4354761Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4354920Z 2025-09-07T08:03:40.4355027Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4355391Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4355721Z return mod(**inputs) 2025-09-07T08:03:40.4356079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4356458Z outputs = self.fnet( 2025-09-07T08:03:40.4356808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4357198Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4357568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4357956Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4358314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4358730Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4359111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4359512Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4359910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4360343Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4360738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4361151Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4361309Z 2025-09-07T08:03:40.4361423Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4361788Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4362113Z return mod(**inputs) 2025-09-07T08:03:40.4362471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4362864Z outputs = self.fnet( 2025-09-07T08:03:40.4363240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4363636Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4364040Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4364459Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4364853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4365240Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4365645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4366078Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4366502Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4366978Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4367392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4367822Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4367997Z 2025-09-07T08:03:40.4368110Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4368493Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4368829Z return mod(**inputs) 2025-09-07T08:03:40.4369168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4369536Z outputs = self.fnet( 2025-09-07T08:03:40.4369898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4370306Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4370707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4371123Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4371516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4371904Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4372311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4372739Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4373208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4373623Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4374035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4374471Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4374636Z 2025-09-07T08:03:40.4374725Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4375020Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4375413Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4375762Z return mod(**inputs) 2025-09-07T08:03:40.4376149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4376548Z outputs = self.fnet( 2025-09-07T08:03:40.4376928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4377336Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4377738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4378150Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4378542Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4378898Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4379274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-09-07T08:03:40.4379663Z layer_output = apply_chunking_to_forward( 2025-09-07T08:03:40.4380054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-09-07T08:03:40.4380465Z return forward_fn(*input_tensors) 2025-09-07T08:03:40.4380881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-09-07T08:03:40.4381339Z intermediate_output = self.intermediate(fourier_output) 2025-09-07T08:03:40.4381763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-09-07T08:03:40.4382188Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-09-07T08:03:40.4382565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-09-07T08:03:40.4383011Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-09-07T08:03:40.4383239Z 2025-09-07T08:03:40.4383332Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4383554Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4383784Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4384143Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4384470Z return mod(**inputs) 2025-09-07T08:03:40.4384824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4385190Z outputs = self.fnet( 2025-09-07T08:03:40.4385544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4385923Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4386294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4386683Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4387041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4387438Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4387815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4388213Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4388603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4389019Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4389399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4389803Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4389958Z 2025-09-07T08:03:40.4390070Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4390419Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4390743Z return mod(**inputs) 2025-09-07T08:03:40.4391092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4391462Z outputs = self.fnet( 2025-09-07T08:03:40.4391809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4392180Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4392553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4392943Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4393310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4393662Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4394045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4394446Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4394843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4395229Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4395604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4396009Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4396171Z 2025-09-07T08:03:40.4396276Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4396633Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4396954Z return mod(**inputs) 2025-09-07T08:03:40.4397306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4397668Z outputs = self.fnet( 2025-09-07T08:03:40.4398010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4398375Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4398733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4399117Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4399471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4399829Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4400207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4400624Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4401048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4401504Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4401917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4402364Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4402532Z 2025-09-07T08:03:40.4402679Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4403064Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4403411Z return mod(**inputs) 2025-09-07T08:03:40.4403789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4404193Z outputs = self.fnet( 2025-09-07T08:03:40.4404582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4404997Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4405402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4405829Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4406215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4406604Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4407112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4407572Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4408003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4408428Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4408809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4409214Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4409370Z 2025-09-07T08:03:40.4409458Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4409693Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4410054Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4410376Z return mod(**inputs) 2025-09-07T08:03:40.4410728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4411099Z outputs = self.fnet( 2025-09-07T08:03:40.4411442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4411827Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4412211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4412613Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4412977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4413346Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4413744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-09-07T08:03:40.4414135Z layer_output = apply_chunking_to_forward( 2025-09-07T08:03:40.4414555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-09-07T08:03:40.4414939Z return forward_fn(*input_tensors) 2025-09-07T08:03:40.4415373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-09-07T08:03:40.4415855Z intermediate_output = self.intermediate(fourier_output) 2025-09-07T08:03:40.4416270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-09-07T08:03:40.4416688Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-09-07T08:03:40.4417100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-09-07T08:03:40.4417552Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-09-07T08:03:40.4417790Z 2025-09-07T08:03:40.4417873Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4418090Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4418329Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4418684Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4419012Z return mod(**inputs) 2025-09-07T08:03:40.4419377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4419767Z outputs = self.fnet( 2025-09-07T08:03:40.4420109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4420488Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4420864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4421257Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4421622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4421973Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4422353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4422754Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4423153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4423529Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4423912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4424316Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4424470Z 2025-09-07T08:03:40.4424583Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4424940Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4425257Z return mod(**inputs) 2025-09-07T08:03:40.4425608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4425983Z outputs = self.fnet( 2025-09-07T08:03:40.4426329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4426705Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4427069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4427462Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4427828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4428185Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4428555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4428994Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4429415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4429800Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4430178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4430569Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4430731Z 2025-09-07T08:03:40.4430870Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4431227Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4431550Z return mod(**inputs) 2025-09-07T08:03:40.4431897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4432257Z outputs = self.fnet( 2025-09-07T08:03:40.4432604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4432979Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4433346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4433725Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4434086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4434447Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4434826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4435224Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4435620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4436013Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4436400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4436809Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4436965Z 2025-09-07T08:03:40.4437078Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4437435Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4437779Z return mod(**inputs) 2025-09-07T08:03:40.4438129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4438494Z outputs = self.fnet( 2025-09-07T08:03:40.4438830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4439203Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4439573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4439956Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4440315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4440664Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4441045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4441440Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4441833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4442214Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4442588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4443029Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4443182Z 2025-09-07T08:03:40.4443273Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4443517Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4443873Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4444204Z return mod(**inputs) 2025-09-07T08:03:40.4444599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4445191Z outputs = self.fnet( 2025-09-07T08:03:40.4445586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4445990Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4446392Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4446871Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4447295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4447685Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4448101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-09-07T08:03:40.4448503Z layer_output = apply_chunking_to_forward( 2025-09-07T08:03:40.4448922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-09-07T08:03:40.4449305Z return forward_fn(*input_tensors) 2025-09-07T08:03:40.4449692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-09-07T08:03:40.4450125Z intermediate_output = self.intermediate(fourier_output) 2025-09-07T08:03:40.4450534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-09-07T08:03:40.4450938Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-09-07T08:03:40.4451311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-09-07T08:03:40.4451751Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-09-07T08:03:40.4451990Z 2025-09-07T08:03:40.4452072Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4452286Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4452523Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4452884Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4453198Z return mod(**inputs) 2025-09-07T08:03:40.4453546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4453915Z outputs = self.fnet( 2025-09-07T08:03:40.4454263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4454634Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4455006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4455403Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4455768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4456130Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4456499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4456944Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4457369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4457754Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4458129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4458538Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4458701Z 2025-09-07T08:03:40.4458854Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4459212Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4459536Z return mod(**inputs) 2025-09-07T08:03:40.4459877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4460258Z outputs = self.fnet( 2025-09-07T08:03:40.4460631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4461010Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4461376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4461766Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4462138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4462512Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4462903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4463310Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4463717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4464114Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4464506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4464922Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4465079Z 2025-09-07T08:03:40.4465187Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4465550Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4465885Z return mod(**inputs) 2025-09-07T08:03:40.4466242Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4466620Z outputs = self.fnet( 2025-09-07T08:03:40.4466994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4467418Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4467802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4468202Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4468569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4468957Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4469371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4469804Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4470232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4470649Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4471061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4471563Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4471730Z 2025-09-07T08:03:40.4471849Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4472236Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4472577Z return mod(**inputs) 2025-09-07T08:03:40.4472992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4473393Z outputs = self.fnet( 2025-09-07T08:03:40.4473772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4474184Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4474583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4475005Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4475396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4475782Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4476185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4476616Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4477042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4477451Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4477854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4478299Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4478473Z 2025-09-07T08:03:40.4478563Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4478823Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4479209Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4479549Z return mod(**inputs) 2025-09-07T08:03:40.4479930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4480333Z outputs = self.fnet( 2025-09-07T08:03:40.4480598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4480687Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4480947Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4481052Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4481289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4481378Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4481654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward 2025-09-07T08:03:40.4481748Z layer_output = apply_chunking_to_forward( 2025-09-07T08:03:40.4482033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward 2025-09-07T08:03:40.4482121Z return forward_fn(*input_tensors) 2025-09-07T08:03:40.4482417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk 2025-09-07T08:03:40.4482551Z intermediate_output = self.intermediate(fourier_output) 2025-09-07T08:03:40.4482822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward 2025-09-07T08:03:40.4482988Z hidden_states = self.intermediate_act_fn(hidden_states) 2025-09-07T08:03:40.4483217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward 2025-09-07T08:03:40.4483418Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0)))) 2025-09-07T08:03:40.4483422Z 2025-09-07T08:03:40.4483509Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4483593Z cudagraph partition due to non gpu ops 2025-09-07T08:03:40.4483749Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4483964Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4484044Z return mod(**inputs) 2025-09-07T08:03:40.4484317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4484391Z outputs = self.fnet( 2025-09-07T08:03:40.4484674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4484758Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4485028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4485124Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4485371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4485458Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4485729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4485843Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4486115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4486213Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4486475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4486586Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4486590Z 2025-09-07T08:03:40.4486710Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:03:40.4487074Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:03:40.4487172Z return mod(**inputs) 2025-09-07T08:03:40.4487434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward 2025-09-07T08:03:40.4487507Z outputs = self.fnet( 2025-09-07T08:03:40.4487786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward 2025-09-07T08:03:40.4487871Z encoder_outputs = self.encoder( 2025-09-07T08:03:40.4488146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward 2025-09-07T08:03:40.4488233Z layer_outputs = layer_module(hidden_states) 2025-09-07T08:03:40.4488465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:03:40.4488546Z return super().__call__(*args, **kwargs) 2025-09-07T08:03:40.4488794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 249, in forward 2025-09-07T08:03:40.4488903Z self_fourier_outputs = self.fourier(hidden_states) 2025-09-07T08:03:40.4489150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 202, in forward 2025-09-07T08:03:40.4489239Z self_outputs = self.self(hidden_states) 2025-09-07T08:03:40.4489485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 181, in forward 2025-09-07T08:03:40.4489644Z outputs = self.fourier_transform(hidden_states).real 2025-09-07T08:03:40.4489654Z 2025-09-07T08:03:40.4489761Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-07T08:03:40.4495573Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:03:40.4495640Z return mod(**inputs)
2025-09-07T08:03:40.4495902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 671, in forward
2025-09-07T08:03:40.4495969Z outputs = self.fnet(
2025-09-07T08:03:40.4496207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 518, in forward
2025-09-07T08:03:40.4496289Z encoder_outputs = self.encoder(
2025-09-07T08:03:40.4496567Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 280, in forward
2025-09-07T08:03:40.4496663Z layer_outputs = layer_module(hidden_states)
2025-09-07T08:03:40.4496879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:03:40.4496957Z return super().__call__(*args, **kwargs)
2025-09-07T08:03:40.4497208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 252, in forward
2025-09-07T08:03:40.4497294Z layer_output = apply_chunking_to_forward(
2025-09-07T08:03:40.4497557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/pytorch_utils.py", line 251, in apply_chunking_to_forward
2025-09-07T08:03:40.4497637Z return forward_fn(*input_tensors)
2025-09-07T08:03:40.4497922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 261, in feed_forward_chunk
2025-09-07T08:03:40.4498039Z intermediate_output = self.intermediate(fourier_output)
2025-09-07T08:03:40.4498282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 220, in forward
2025-09-07T08:03:40.4498400Z hidden_states = self.intermediate_act_fn(hidden_states)
2025-09-07T08:03:40.4498611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/activations.py", line 47, in forward
2025-09-07T08:03:40.4498801Z return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
2025-09-07T08:03:40.4498805Z
2025-09-07T08:03:40.4498888Z cudagraph partition due to non gpu ops
2025-09-07T08:03:40.4498969Z cudagraph partition due to non gpu ops
2025-09-07T08:03:40.4499083Z cudagraph partition due to non gpu ops.
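The activation frame above evaluates the tanh approximation to GELU. A standalone sketch of the same elementwise formula (recent PyTorch also exposes this as torch.nn.functional.gelu(x, approximate="tanh")):

    import math
    import torch

    def gelu_tanh(x: torch.Tensor) -> torch.Tensor:
        # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x**3)))
        return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))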
Found from :
2025-09-07T08:03:40.4529094Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:03:40.4529161Z return mod(**inputs)
2025-09-07T08:03:40.4529412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/fnet/modeling_fnet.py", line 686, in forward
2025-09-07T08:03:40.4529599Z masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
2025-09-07T08:03:40.4529602Z
2025-09-07T08:03:43.9274580Z Compilation time (from dynamo_timed): 25.63378864
2025-09-07T08:03:43.9337104Z pass
2025-09-07T08:03:43.9341290Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:03:43.9345339Z TIMING: _recursive_pre_grad_passes:0.02464 _recursive_joint_graph_passes:0.47273 _recursive_post_grad_passes:0.06537 packed_linear_template_precompiling:7.12922 packed_linear_template_autotuning:1.1478 async_compile.wait:0.71824 code_gen:3.13905 inductor_compile:21.57294 backend_compile:23.91691 gc:0.00016 entire_frame_compile:25.63379 total_wall_time:25.63379
2025-09-07T08:03:43.9346680Z STATS: call_* op count: 232 | FakeTensorMode.__torch_dispatch__:14358 | FakeTensor.__torch_dispatch__:2950 | ProxyTorchDispatchMode.__torch_dispatch__:2923
2025-09-07T08:03:43.9347173Z Dynamo produced 1 graphs covering 232 ops with 0 graph breaks (0 unique)
2025-09-07T08:03:46.5477034Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:03:46.5477951Z import pynvml # type: ignore[import]
2025-09-07T08:03:49.1981603Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
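The TIMING breakdown above (inductor_compile, backend_compile, packed_linear template precompiling/autotuning) is emitted by a torch.compile run through the Inductor backend. A rough sketch of the kind of CPU inference setup that produces this sort of output, assuming the documented torch._inductor.config knobs; this is illustrative, not the benchmark harness itself:

    import torch
    import torch._inductor.config as inductor_config

    inductor_config.max_autotune = True  # benchmark multiple kernel/template choices
    inductor_config.freezing = True      # constant-fold weights for inference

    model = torch.nn.Sequential(torch.nn.Linear(768, 768), torch.nn.GELU()).eval()
    compiled = torch.compile(model, dynamic=True)

    with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        out = compiled(torch.randn(1, 768))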
2025-09-07T08:03:49.1983025Z from pkg_resources import resource_filename
2025-09-07T08:03:49.8467738Z
2025-09-07T08:03:51.0721236Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:03:51.0721905Z loading model: 0it [00:01, ?it/s]
2025-09-07T08:03:51.0723134Z cpu eval LayoutLMForMaskedLM
2025-09-07T08:03:51.7021841Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:03:51.8606565Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:03:52.0247466Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:04:16.3532514Z cudagraph partition due to non gpu ops
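The repeated empty_gpu_cache warnings above come from the harness attempting to flush an accelerator cache while running on CPU. A hedged sketch of a device-type guard that would avoid that; the helper name is hypothetical and not taken from the benchmark code:

    import torch

    def maybe_empty_gpu_cache(device: torch.device) -> None:
        # Only CUDA/XPU expose a cache to flush; on CPU this is a no-op.
        if device.type == "cuda":
            torch.cuda.empty_cache()
        elif device.type == "xpu" and hasattr(torch, "xpu"):
            torch.xpu.empty_cache()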
2025-09-07T08:04:16.3594733Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:04:16.3595125Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:04:16.3595480Z return mod(**inputs)
2025-09-07T08:04:16.3595874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:04:16.3596303Z return func(*args, **kwargs)
2025-09-07T08:04:16.3596674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:04:16.3597046Z return func(*args, **kwargs)
2025-09-07T08:04:16.3597390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:04:16.3597750Z output = func(self, *args, **kwargs)
2025-09-07T08:04:16.3598190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 776, in forward
2025-09-07T08:04:16.3598595Z masked_lm_loss = loss_fct(
2025-09-07T08:04:16.3598727Z
2025-09-07T08:04:19.9862524Z Compilation time (from dynamo_timed): 26.636171342
2025-09-07T08:04:19.9893590Z pass
2025-09-07T08:04:19.9894036Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:04:19.9902262Z TIMING: _recursive_pre_grad_passes:0.04207 _recursive_joint_graph_passes:0.4971 _recursive_post_grad_passes:0.07503 linear_unary_template_precompiling:0.01334 async_compile.wait:0.61062 code_gen:2.93339 inductor_compile:17.37872 backend_compile:23.44292 gc:0.00061 entire_frame_compile:26.63617 total_wall_time:26.63617
2025-09-07T08:04:19.9903453Z STATS: call_* op count: 434 | FakeTensorMode.__torch_dispatch__:30262 | FakeTensor.__torch_dispatch__:3051 | ProxyTorchDispatchMode.__torch_dispatch__:8633
2025-09-07T08:04:19.9903965Z Dynamo produced 1 graphs covering 434 ops with 0 graph breaks (0 unique)
2025-09-07T08:04:22.6992868Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:04:22.6994363Z import pynvml # type: ignore[import]
2025-09-07T08:04:25.3716131Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html.
The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:04:25.3722907Z from pkg_resources import resource_filename
2025-09-07T08:04:26.0684264Z
2025-09-07T08:04:27.1005429Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:04:27.1005876Z loading model: 0it [00:01, ?it/s]
2025-09-07T08:04:27.1006196Z cpu eval LayoutLMForSequenceClassification
2025-09-07T08:04:27.6001978Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:04:27.7452905Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:04:27.8889687Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:04:53.2386098Z Autotune Choices Stats:
2025-09-07T08:04:53.2387096Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_72", "best_time": 0.006075999863242032}
2025-09-07T08:04:53.2397367Z AUTOTUNE linear_unary(1x768, 768x768, 768)
2025-09-07T08:04:53.2397619Z strides: [768, 1], [1, 0], [1]
2025-09-07T08:04:53.2397887Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:04:53.2398180Z cpp_CppMicroGemmAMX_72 0.0061 ms 100.0%
2025-09-07T08:04:53.2398478Z _linear_pointwise 0.0404 ms 15.0%
2025-09-07T08:04:53.2398870Z SingleProcess AUTOTUNE benchmarking takes 0.2531 seconds and 1.4834 seconds precompiling for 2 choices
2025-09-07T08:04:54.9411650Z Autotune Choices Stats:
2025-09-07T08:04:54.9412167Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_73", "best_time": 0.004179999905318255}
2025-09-07T08:04:54.9421261Z AUTOTUNE linear_unary(1x768, 2x768, 2)
2025-09-07T08:04:54.9421559Z strides: [768, 1], [1, 0], [1]
2025-09-07T08:04:54.9422162Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:04:54.9422515Z cpp_CppMicroGemmAMX_73 0.0042 ms 100.0%
2025-09-07T08:04:54.9422758Z _linear_pointwise 0.0307 ms 13.6%
2025-09-07T08:04:54.9423141Z SingleProcess AUTOTUNE benchmarking takes 0.2507 seconds and 1.3466 seconds precompiling for 2 choices
2025-09-07T08:04:55.3334115Z cudagraph partition due to non gpu ops.
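In the AUTOTUNE tables above, the percentage column appears to express each candidate's speed relative to the fastest choice: for the 1x768 by 768x768 case, 0.0061 ms / 0.0404 ms is roughly 15.0%, so the cpp_CppMicroGemmAMX template is about 6.6x faster than the _linear_pointwise fallback at this shape; the 1x768 by 2x768 case works out the same way (0.0042 / 0.0307 is roughly 13.6%).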
Found from :
2025-09-07T08:04:55.3334637Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:04:55.3335319Z return mod(**inputs)
2025-09-07T08:04:55.3337717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:04:55.3338252Z output = func(self, *args, **kwargs)
2025-09-07T08:04:55.3338788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 891, in forward
2025-09-07T08:04:55.3339279Z logits = self.classifier(pooled_output)
2025-09-07T08:04:55.3339471Z
2025-09-07T08:04:55.3339572Z cudagraph partition due to non gpu ops
2025-09-07T08:04:55.3376112Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:04:55.3376527Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:04:55.3376897Z return mod(**inputs)
2025-09-07T08:04:55.3377270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:04:55.3377662Z output = func(self, *args, **kwargs)
2025-09-07T08:04:55.3378110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward
2025-09-07T08:04:55.3378615Z outputs = self.layoutlm(
2025-09-07T08:04:55.3379018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:04:55.3379438Z return func(*args, **kwargs)
2025-09-07T08:04:55.3379831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:04:55.3380246Z return func(*args, **kwargs)
2025-09-07T08:04:55.3380656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:04:55.3381051Z output = func(self, *args, **kwargs)
2025-09-07T08:04:55.3381500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 654, in forward
2025-09-07T08:04:55.3381978Z pooled_output = self.pooler(sequence_output)
2025-09-07T08:04:55.3382452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 430, in forward
2025-09-07T08:04:55.3382931Z pooled_output = self.dense(first_token_tensor)
2025-09-07T08:04:55.3383126Z
2025-09-07T08:04:55.3383247Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T08:04:55.3383623Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:04:55.3383986Z return mod(**inputs)
2025-09-07T08:04:55.3384352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:04:55.3384747Z output = func(self, *args, **kwargs)
2025-09-07T08:04:55.3385191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 875, in forward
2025-09-07T08:04:55.3385651Z outputs = self.layoutlm(
2025-09-07T08:04:55.3386027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:04:55.3386415Z return func(*args, **kwargs)
2025-09-07T08:04:55.3386788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:04:55.3387203Z return func(*args, **kwargs)
2025-09-07T08:04:55.3387625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:04:55.3388021Z output = func(self, *args, **kwargs)
2025-09-07T08:04:55.3388466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 654, in forward
2025-09-07T08:04:55.3388929Z pooled_output = self.pooler(sequence_output)
2025-09-07T08:04:55.3389393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 431, in forward
2025-09-07T08:04:55.3389828Z pooled_output = self.activation(pooled_output)
2025-09-07T08:04:55.3389994Z
2025-09-07T08:04:55.3390123Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:04:55.3390521Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:04:55.3390883Z return mod(**inputs)
2025-09-07T08:04:55.3391243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:04:55.3391633Z output = func(self, *args, **kwargs)
2025-09-07T08:04:55.3392070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 891, in forward
2025-09-07T08:04:55.3392492Z logits = self.classifier(pooled_output)
2025-09-07T08:04:55.3392635Z
2025-09-07T08:04:55.3392743Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:04:55.3393113Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:04:55.3393447Z return mod(**inputs)
2025-09-07T08:04:55.3393787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:04:55.3394209Z output = func(self, *args, **kwargs)
2025-09-07T08:04:55.3394660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 911, in forward
2025-09-07T08:04:55.3395175Z loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
2025-09-07T08:04:55.3395378Z
2025-09-07T08:04:55.3395503Z cudagraph partition due to non gpu ops.
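The pooler, activation, and classifier frames above all belong to the same sequence-classification head: take the first token, apply a dense layer and tanh, then project to logits that feed the cross-entropy loss in the following frames. A small illustrative reconstruction of that path (not the transformers source):

    import torch

    class PoolerClassifier(torch.nn.Module):
        def __init__(self, hidden: int, num_labels: int):
            super().__init__()
            self.dense = torch.nn.Linear(hidden, hidden)
            self.activation = torch.nn.Tanh()
            self.classifier = torch.nn.Linear(hidden, num_labels)

        def forward(self, sequence_output: torch.Tensor) -> torch.Tensor:
            first_token_tensor = sequence_output[:, 0]          # [CLS]-style pooling
            pooled_output = self.activation(self.dense(first_token_tensor))
            return self.classifier(pooled_output)               # logits for the loss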
Found from : 2025-09-07T08:04:55.3395940Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:04:55.3396289Z return mod(**inputs) 2025-09-07T08:04:55.3396630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:04:55.3396991Z output = func(self, *args, **kwargs) 2025-09-07T08:04:55.3397453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 911, in forward 2025-09-07T08:04:55.3397954Z loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) 2025-09-07T08:04:55.3398157Z 2025-09-07T08:04:55.3398272Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:04:55.3398679Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:04:55.3399042Z return mod(**inputs) 2025-09-07T08:04:55.3399402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:04:55.3399757Z output = func(self, *args, **kwargs) 2025-09-07T08:04:55.3400167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 911, in forward 2025-09-07T08:04:55.3400657Z loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) 2025-09-07T08:04:55.3400847Z 2025-09-07T08:05:14.3007384Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:05:14.3007970Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:05:14.3008353Z return mod(**inputs) 2025-09-07T08:05:14.3008726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:05:14.3009099Z output = func(self, *args, **kwargs) 2025-09-07T08:05:14.3009510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/layoutlm/modeling_layoutlm.py", line 891, in forward 2025-09-07T08:05:14.3009951Z logits = self.classifier(pooled_output) 2025-09-07T08:05:14.3010108Z 2025-09-07T08:05:14.3010197Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3010420Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3010633Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3010837Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3011046Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3011260Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3011470Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3011670Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3011887Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3012111Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3012334Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3012551Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3012815Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3013043Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3013270Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3013490Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3013715Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3013940Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3014149Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3014349Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3014943Z cudagraph partition due to 
non gpu ops 2025-09-07T08:05:14.3015156Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3015366Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3015568Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3015778Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3015989Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3016198Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3016412Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3016728Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3016943Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3017157Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3017361Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3017572Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3017782Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3018014Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3018225Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3018424Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3018632Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3018848Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3019059Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3019264Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3019476Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3019699Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3019922Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3020143Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3020351Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3020562Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3020776Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3020980Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3021191Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3021403Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3021615Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3021818Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3022040Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3022253Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3022517Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3022732Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3022942Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3023152Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3023395Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3023619Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3023857Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3024096Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3024329Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3024551Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3024780Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3025004Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3025226Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3025442Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3025676Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3025899Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3026119Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3026334Z cudagraph partition due to non gpu ops 
2025-09-07T08:05:14.3026560Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3026783Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3027003Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3027224Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3027431Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3027641Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3027852Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3028100Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3028359Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3028580Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3028800Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3029019Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3029235Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3029455Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3029678Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3029901Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3030158Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3030382Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3030603Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3030824Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3031038Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3031261Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3031487Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3031715Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3031933Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3032190Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3032413Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3032636Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3032856Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3033073Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3033296Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3033519Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3033745Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3033959Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3034178Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3034399Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3034621Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3034838Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3035061Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3035281Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3035500Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3035717Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3035943Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3036166Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3036386Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3036608Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3036835Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3037055Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3037276Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3037490Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3037710Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3037933Z cudagraph partition due to non gpu ops 
2025-09-07T08:05:14.3038171Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3038387Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3038608Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3038827Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3039049Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3039264Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3039486Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3039708Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3039933Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3040149Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3040370Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3040593Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3040817Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3041039Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3041256Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3041480Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3041745Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3041966Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3042198Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3042419Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3042641Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3042861Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3043073Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3043300Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3043554Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3043777Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3043988Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3044208Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3044428Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3044647Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3044860Z cudagraph partition due to non gpu ops 2025-09-07T08:05:14.3045385Z cudagraph partition due to non gpu ops. 
2025-09-07T08:05:16.9158635Z Compilation time (from dynamo_timed): 47.824714957
2025-09-07T08:05:16.9159016Z pass
2025-09-07T08:05:16.9159359Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:05:16.9160531Z TIMING: _recursive_pre_grad_passes:0.08316 _recursive_joint_graph_passes:0.85473 _recursive_post_grad_passes:0.13063 linear_unary_template_precompiling:2.85584 linear_unary_template_autotuning:0.50022 async_compile.wait:0.86703 code_gen:5.42427 inductor_compile:29.88203 backend_compile:40.99786 gc:0.00315 entire_frame_compile:47.82471 total_wall_time:47.82471
2025-09-07T08:05:16.9162020Z STATS: call_* op count: 864 | FakeTensorMode.__torch_dispatch__:58982 | FakeTensor.__torch_dispatch__:5857 | ProxyTorchDispatchMode.__torch_dispatch__:16959
2025-09-07T08:05:16.9162569Z Dynamo produced 2 graphs covering 864 ops with 0 graph breaks (0 unique)
2025-09-07T08:05:19.9949201Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:05:19.9950260Z   import pynvml  # type: ignore[import]
2025-09-07T08:05:22.6856305Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:05:22.6857353Z   from pkg_resources import resource_filename
2025-09-07T08:05:23.3692374Z 
2025-09-07T08:05:30.0975684Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:05:30.0976004Z loading model: 0it [00:06, ?it/s]
2025-09-07T08:05:30.0976260Z cpu  eval  M2M100ForConditionalGeneration
2025-09-07T08:05:30.9849084Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:05:31.4149105Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:05:31.7954390Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:06:01.9333443Z Autotune Choices Stats:
2025-09-07T08:06:01.9333947Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.02991049996126094}
2025-09-07T08:06:01.9346773Z AUTOTUNE linear_unary(128x1024, 1024x1024, 1024)
2025-09-07T08:06:01.9349864Z   strides: [1024, 1], [1, 0], [1]
2025-09-07T08:06:01.9350147Z   dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:06:01.9350449Z   cpp_CppMicroGemmAMX_0 0.0299 ms 100.0%
2025-09-07T08:06:01.9350714Z   _linear_pointwise 0.0770 ms 38.8%
2025-09-07T08:06:01.9351121Z SingleProcess AUTOTUNE benchmarking takes 0.2587 seconds and 1.4424 seconds precompiling for 2 choices
2025-09-07T08:06:04.0212610Z Autotune Choices Stats:
2025-09-07T08:06:04.0213337Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_4", "best_time": 0.07936300016808673}
2025-09-07T08:06:04.0223663Z AUTOTUNE linear_unary(128x1024, 4096x1024, 4096)
2025-09-07T08:06:04.0224304Z   strides: [1024, 1], [1, 0], [1]
2025-09-07T08:06:04.0231986Z   dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:06:04.0236331Z   cpp_CppMicroGemmAMX_4 0.0794 ms 100.0%
2025-09-07T08:06:04.0236654Z   _linear_pointwise 0.1187 ms 66.8%
2025-09-07T08:06:04.0237045Z SingleProcess AUTOTUNE benchmarking takes 0.2688 seconds and 1.4387 seconds precompiling for 2 choices
2025-09-07T08:06:05.7994149Z Autotune Choices Stats:
2025-09-07T08:06:05.7994915Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_5", "best_time": 0.06286500001806417}
2025-09-07T08:06:05.8005450Z AUTOTUNE linear_unary(128x4096, 1024x4096, 1024)
2025-09-07T08:06:05.8005816Z   strides: [4096, 1], [1, 0], [1]
2025-09-07T08:06:05.8006209Z   dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:06:05.8006599Z   cpp_CppMicroGemmAMX_5 0.0629 ms 100.0%
2025-09-07T08:06:05.8007117Z   _linear_pointwise 0.1251 ms 50.2%
2025-09-07T08:06:05.8007660Z SingleProcess AUTOTUNE benchmarking takes 0.2687 seconds and 1.4252 seconds precompiling for 2 choices
2025-09-07T08:06:23.6399230Z Autotune Choices Stats:
2025-09-07T08:06:23.6401844Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_192", "best_time": 3.159266500006197}
2025-09-07T08:06:23.6408106Z AUTOTUNE linear_unary(128x1024, 128112x1024)
2025-09-07T08:06:23.6408394Z   strides: [1024, 1], [1, 0]
2025-09-07T08:06:23.6408635Z   dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:06:23.6408887Z   cpp_CppMicroGemmAMX_192 3.1593 ms 100.0%
2025-09-07T08:06:23.6409123Z   _linear_pointwise 3.7638 ms 83.9%
2025-09-07T08:06:23.6409498Z SingleProcess AUTOTUNE benchmarking takes 0.7239 seconds and 1.3928 seconds precompiling for 2 choices
2025-09-07T08:06:25.1893019Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:06:25.1893577Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:06:25.1893936Z     return mod(**inputs)
2025-09-07T08:06:25.1894401Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-09-07T08:06:25.1894828Z     outputs = self.model(
2025-09-07T08:06:25.1895223Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-09-07T08:06:25.1895635Z     encoder_outputs = self.encoder(
2025-09-07T08:06:25.1896039Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 844, in forward
2025-09-07T08:06:25.1896532Z     embed_pos = self.embed_positions(input_ids, inputs_embeds)
2025-09-07T08:06:25.1897021Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
2025-09-07T08:06:25.1897422Z     return func(*args, **kwargs)
2025-09-07T08:06:25.1897858Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 148, in forward
2025-09-07T08:06:25.1898458Z     position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length).to(
2025-09-07T08:06:25.1899063Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 80, in create_position_ids_from_input_ids
2025-09-07T08:06:25.1899522Z     mask = input_ids.ne(padding_idx).int()
2025-09-07T08:06:25.1899673Z 
2025-09-07T08:06:25.1899760Z cudagraph partition due to non gpu ops
2025-09-07T08:06:25.1902394Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:06:25.1902779Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:06:25.1903124Z     return mod(**inputs)
2025-09-07T08:06:25.1903506Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-09-07T08:06:25.1903943Z     outputs = self.model(
2025-09-07T08:06:25.1904350Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-09-07T08:06:25.1904753Z     encoder_outputs = self.encoder(
2025-09-07T08:06:25.1905197Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 844, in forward
2025-09-07T08:06:25.1905655Z     embed_pos = self.embed_positions(input_ids, inputs_embeds)
2025-09-07T08:06:25.1906166Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
2025-09-07T08:06:25.1906605Z     return func(*args, **kwargs)
2025-09-07T08:06:25.1907005Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 148, in forward
2025-09-07T08:06:25.1907536Z     position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length).to(
2025-09-07T08:06:25.1908175Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 81, in create_position_ids_from_input_ids
2025-09-07T08:06:25.1908751Z     incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
2025-09-07T08:06:25.1909053Z 
2025-09-07T08:06:25.1915749Z cudagraph partition due to non gpu ops
2025-09-07T08:06:25.1917259Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:06:25.1917666Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:06:25.1918001Z     return mod(**inputs)
2025-09-07T08:06:25.1918372Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-09-07T08:06:25.1918760Z     outputs = self.model(
2025-09-07T08:06:25.1919139Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-09-07T08:06:25.1919534Z     encoder_outputs = self.encoder(
2025-09-07T08:06:25.1919935Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward
2025-09-07T08:06:25.1920330Z     layer_outputs = encoder_layer(
2025-09-07T08:06:25.1920691Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:06:25.1921121Z     return super().__call__(*args, **kwargs)
2025-09-07T08:06:25.1921532Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward
2025-09-07T08:06:25.1921970Z     hidden_states, attn_weights = self.self_attn(
2025-09-07T08:06:25.1922399Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-09-07T08:06:25.1922910Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:06:25.1923405Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-09-07T08:06:25.1923927Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:06:25.1924123Z 
2025-09-07T08:06:25.1924245Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:06:25.1924633Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:06:25.1925015Z     return mod(**inputs)
2025-09-07T08:06:25.1925426Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward
2025-09-07T08:06:25.1925851Z     outputs = self.model(
2025-09-07T08:06:25.1926261Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward
2025-09-07T08:06:25.1926695Z     encoder_outputs = self.encoder(
2025-09-07T08:06:25.1927226Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward
2025-09-07T08:06:25.1927667Z     layer_outputs = encoder_layer(
2025-09-07T08:06:25.1928064Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:06:25.1928457Z     return super().__call__(*args, **kwargs)
2025-09-07T08:06:25.1928886Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward
2025-09-07T08:06:25.1929316Z     hidden_states, attn_weights = self.self_attn(
2025-09-07T08:06:25.1929734Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward
2025-09-07T08:06:25.1930156Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:06:25.1930607Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-09-07T08:06:25.1931091Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:06:25.1931276Z 
2025-09-07T08:06:25.1931364Z cudagraph partition due to non gpu ops
Found from : 2025-09-07T08:06:25.2029279Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2029604Z return mod(**inputs) 2025-09-07T08:06:25.2029960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2030341Z outputs = self.model( 2025-09-07T08:06:25.2030703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-09-07T08:06:25.2031090Z encoder_outputs = self.encoder( 2025-09-07T08:06:25.2031468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-09-07T08:06:25.2031843Z layer_outputs = encoder_layer( 2025-09-07T08:06:25.2032197Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2032558Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2032942Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-09-07T08:06:25.2033334Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:06:25.2033739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2034140Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2034582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2035037Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2035212Z 2025-09-07T08:06:25.2035316Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2035663Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2035973Z return mod(**inputs) 2025-09-07T08:06:25.2036325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2036693Z outputs = self.model( 2025-09-07T08:06:25.2037087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-09-07T08:06:25.2037460Z encoder_outputs = self.encoder( 2025-09-07T08:06:25.2037828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-09-07T08:06:25.2038196Z layer_outputs = encoder_layer( 2025-09-07T08:06:25.2038528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2038911Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2039290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-09-07T08:06:25.2039687Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:06:25.2040082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2040477Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2040911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2041356Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2041516Z 2025-09-07T08:06:25.2041604Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2041817Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2042023Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2042228Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2042437Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2042649Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2042850Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2043061Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2043275Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2043515Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2043880Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2044202Z return mod(**inputs) 2025-09-07T08:06:25.2044563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2044994Z outputs = self.model( 2025-09-07T08:06:25.2045559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-09-07T08:06:25.2045959Z encoder_outputs = self.encoder( 2025-09-07T08:06:25.2046356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-09-07T08:06:25.2046753Z layer_outputs = encoder_layer( 2025-09-07T08:06:25.2047209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2047611Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2048056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-09-07T08:06:25.2048492Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:06:25.2048934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2049366Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2049800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2050279Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2050471Z 2025-09-07T08:06:25.2050575Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2050938Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2051352Z return mod(**inputs) 2025-09-07T08:06:25.2051719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2052111Z outputs = self.model( 2025-09-07T08:06:25.2052483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-09-07T08:06:25.2052910Z encoder_outputs = self.encoder( 2025-09-07T08:06:25.2053358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-09-07T08:06:25.2053752Z layer_outputs = encoder_layer( 2025-09-07T08:06:25.2054108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2054484Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2054871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-09-07T08:06:25.2055259Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:06:25.2055654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2056054Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2056495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2056941Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2057100Z 2025-09-07T08:06:25.2057180Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2057391Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2057603Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2057811Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2058017Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2058228Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2058438Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2058647Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2058851Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2059064Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2059305Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2059676Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2060009Z return mod(**inputs) 2025-09-07T08:06:25.2060383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2060757Z outputs = self.model( 2025-09-07T08:06:25.2061120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-09-07T08:06:25.2061504Z encoder_outputs = self.encoder( 2025-09-07T08:06:25.2061872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-09-07T08:06:25.2062252Z layer_outputs = encoder_layer( 2025-09-07T08:06:25.2062599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2062956Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2063342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-09-07T08:06:25.2063735Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:06:25.2064133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2064532Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2064995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2065480Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2065665Z 2025-09-07T08:06:25.2065768Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2066121Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2066438Z return mod(**inputs) 2025-09-07T08:06:25.2066819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2067178Z outputs = self.model( 2025-09-07T08:06:25.2067530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-09-07T08:06:25.2067900Z encoder_outputs = self.encoder( 2025-09-07T08:06:25.2068265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-09-07T08:06:25.2068632Z layer_outputs = encoder_layer( 2025-09-07T08:06:25.2068968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2069323Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2069709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-09-07T08:06:25.2070106Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:06:25.2070495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2070906Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2071361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2071812Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2071971Z 2025-09-07T08:06:25.2072058Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2072263Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2072471Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2072685Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2072885Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2073077Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2073283Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2073485Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2073684Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2073905Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2074253Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2074569Z return mod(**inputs) 2025-09-07T08:06:25.2074924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2075293Z outputs = self.model( 2025-09-07T08:06:25.2075639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-09-07T08:06:25.2076011Z encoder_outputs = self.encoder( 2025-09-07T08:06:25.2076378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-09-07T08:06:25.2076748Z layer_outputs = encoder_layer( 2025-09-07T08:06:25.2077085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2077425Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2077801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-09-07T08:06:25.2078225Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:06:25.2078625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2079021Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2079463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2079973Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2080218Z 2025-09-07T08:06:25.2080340Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2080729Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2081072Z return mod(**inputs) 2025-09-07T08:06:25.2081467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2081882Z outputs = self.model( 2025-09-07T08:06:25.2082276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-09-07T08:06:25.2082691Z encoder_outputs = self.encoder( 2025-09-07T08:06:25.2083090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-09-07T08:06:25.2083502Z layer_outputs = encoder_layer( 2025-09-07T08:06:25.2083883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2084270Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2084683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-09-07T08:06:25.2085111Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:06:25.2085541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2085980Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2086452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2087053Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2087244Z 2025-09-07T08:06:25.2087333Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2087565Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2087794Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2088019Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2088235Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2088458Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2088683Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2088905Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2089127Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2089350Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2089603Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2089991Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2090334Z return mod(**inputs) 2025-09-07T08:06:25.2090727Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2091142Z outputs = self.model( 2025-09-07T08:06:25.2091534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-09-07T08:06:25.2091965Z encoder_outputs = self.encoder( 2025-09-07T08:06:25.2092374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-09-07T08:06:25.2092809Z layer_outputs = encoder_layer( 2025-09-07T08:06:25.2093212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2093628Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2094036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-09-07T08:06:25.2094443Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:06:25.2094885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2095299Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2095755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2096230Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2096422Z 2025-09-07T08:06:25.2096530Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2096899Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2097232Z return mod(**inputs) 2025-09-07T08:06:25.2097599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2097990Z outputs = self.model( 2025-09-07T08:06:25.2098361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1261, in forward 2025-09-07T08:06:25.2098752Z encoder_outputs = self.encoder( 2025-09-07T08:06:25.2099136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 878, in forward 2025-09-07T08:06:25.2099516Z layer_outputs = encoder_layer( 2025-09-07T08:06:25.2099870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2100240Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2100636Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 378, in forward 2025-09-07T08:06:25.2101039Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:06:25.2101437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2101848Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2102292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2102750Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2102913Z 2025-09-07T08:06:25.2103000Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2103209Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2103424Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2103639Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2103849Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2104052Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2104264Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2104476Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2104686Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2104917Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2105292Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2105624Z return mod(**inputs) 2025-09-07T08:06:25.2105997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2106379Z outputs = self.model( 2025-09-07T08:06:25.2106750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2107186Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2107581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2107981Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2108339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2108716Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2109151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2109552Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2109951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2110344Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2110777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2111252Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2111436Z 2025-09-07T08:06:25.2111547Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2111904Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2112225Z return mod(**inputs) 2025-09-07T08:06:25.2112589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2112978Z outputs = self.model( 2025-09-07T08:06:25.2113333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2113699Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2114072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2114450Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2114791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2115151Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2115543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2115951Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2116365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2116760Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2117189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2117626Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2117787Z 2025-09-07T08:06:25.2117864Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2118072Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2118279Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2118481Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2118688Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2118895Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2119101Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2119298Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2119535Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2119892Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2120219Z return mod(**inputs) 2025-09-07T08:06:25.2120613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2120993Z outputs = self.model( 2025-09-07T08:06:25.2121357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2121739Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2122155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2122539Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2122899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2123269Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2123665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2124096Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2124514Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2124947Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2125404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2125898Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2126090Z 2025-09-07T08:06:25.2126209Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2126581Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2127009Z return mod(**inputs) 2025-09-07T08:06:25.2127415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2127847Z outputs = self.model( 2025-09-07T08:06:25.2128247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2128700Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2129142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2129548Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2129918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2130288Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2130750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2131176Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2131608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2132024Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2132469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2132936Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2133111Z 2025-09-07T08:06:25.2133193Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2133416Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2133624Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2134005Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2134215Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2134427Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2134638Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2134868Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2135098Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2135307Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2135545Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2135910Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2136244Z return mod(**inputs) 2025-09-07T08:06:25.2136618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2137091Z outputs = self.model( 2025-09-07T08:06:25.2137458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2137856Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2138245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2138639Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2138998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2139364Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2139767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2140187Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2140609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2141026Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2141469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2141956Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2142145Z 2025-09-07T08:06:25.2142252Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2142612Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2142927Z return mod(**inputs) 2025-09-07T08:06:25.2143291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2143670Z outputs = self.model( 2025-09-07T08:06:25.2144034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2144419Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2144825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2145392Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2145746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2146118Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2146508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2146908Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2147315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2147728Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2148167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2148621Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2148781Z 2025-09-07T08:06:25.2148862Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2149117Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2149349Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2149557Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2149759Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2149970Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2150181Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2150428Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2150781Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2151152Z return mod(**inputs) 2025-09-07T08:06:25.2151519Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2151900Z outputs = self.model( 2025-09-07T08:06:25.2152255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2152640Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2153017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2153402Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2153753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2154115Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2154516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2154947Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2155363Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2155773Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2156214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2156699Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2156892Z 2025-09-07T08:06:25.2156998Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2157366Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2157700Z return mod(**inputs) 2025-09-07T08:06:25.2158066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2158458Z outputs = self.model( 2025-09-07T08:06:25.2158829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2159222Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2159602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2159996Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2160356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2160731Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2161129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2161550Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2161976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2162391Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2162843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2163339Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2163505Z 2025-09-07T08:06:25.2163586Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2163807Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2164019Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2164232Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2164441Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2164663Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2164913Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2165137Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2165352Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2165577Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2165830Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2166220Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2166583Z return mod(**inputs) 2025-09-07T08:06:25.2167096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2167557Z outputs = self.model( 2025-09-07T08:06:25.2167953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2168368Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2168757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2169157Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2169522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2169896Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2170297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2170712Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2171135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2171551Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2172001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2172488Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2172674Z 2025-09-07T08:06:25.2172782Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2173150Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2173495Z return mod(**inputs) 2025-09-07T08:06:25.2173856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2174232Z outputs = self.model( 2025-09-07T08:06:25.2174592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2174974Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2175347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2175732Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2176077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2176440Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2176830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2177226Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2177665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2178050Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2178477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2178917Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2179070Z 2025-09-07T08:06:25.2179185Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2179388Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2179598Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2179804Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2180013Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2180211Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2180418Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2180626Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2180859Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2181215Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2181532Z return mod(**inputs) 2025-09-07T08:06:25.2181903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2182272Z outputs = self.model( 2025-09-07T08:06:25.2182623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2182985Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2183355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2183727Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2184065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2184419Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2184797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2185228Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2185656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2186050Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2186475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2186923Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2187108Z 2025-09-07T08:06:25.2187210Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2187559Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2187876Z return mod(**inputs) 2025-09-07T08:06:25.2188221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2188593Z outputs = self.model( 2025-09-07T08:06:25.2188946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2189325Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2189692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2190067Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2190417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2190825Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2191220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2191641Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2192051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2192463Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2192944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2193398Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2193558Z 2025-09-07T08:06:25.2193647Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2193854Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2194067Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2194276Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2194483Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2194681Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2194888Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2195094Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2195298Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2195525Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2195889Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2196218Z return mod(**inputs) 2025-09-07T08:06:25.2196581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2196955Z outputs = self.model( 2025-09-07T08:06:25.2197318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2197761Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2198148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2198545Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2198888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2199251Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2199644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2200061Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2200486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2200896Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2201347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2201836Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2202024Z 2025-09-07T08:06:25.2202138Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2202506Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2202843Z return mod(**inputs) 2025-09-07T08:06:25.2203233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2203624Z outputs = self.model( 2025-09-07T08:06:25.2203989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2204369Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2204776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2205170Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2205529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2205898Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2206317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2206736Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2207255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2207675Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2208120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2208593Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2208767Z 2025-09-07T08:06:25.2208851Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2209074Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2209293Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2209502Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2209716Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2209930Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2210146Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2210352Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2210596Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2210970Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2211306Z return mod(**inputs) 2025-09-07T08:06:25.2211672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2212068Z outputs = self.model( 2025-09-07T08:06:25.2212441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2212839Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2213227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2213618Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2213978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2214349Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2214746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2215176Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2215597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2216015Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2216467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2216949Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2217138Z 2025-09-07T08:06:25.2217253Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2217617Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2217950Z return mod(**inputs) 2025-09-07T08:06:25.2218325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2218760Z outputs = self.model( 2025-09-07T08:06:25.2219117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2219494Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2219885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2220281Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2220670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2221039Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2221447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2221853Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2222261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2222664Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2223092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2223540Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2223704Z 2025-09-07T08:06:25.2223783Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2224005Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2224202Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2224404Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2224603Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2224804Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2224997Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2225200Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2225404Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2225633Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2225975Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2226290Z return mod(**inputs) 2025-09-07T08:06:25.2226647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2227015Z outputs = self.model( 2025-09-07T08:06:25.2227367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2227733Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2228106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2228489Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2228837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2229198Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2229581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2229975Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2230367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2230762Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2231176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2231633Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2231816Z 2025-09-07T08:06:25.2231917Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2232326Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2232640Z return mod(**inputs) 2025-09-07T08:06:25.2232989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2233361Z outputs = self.model( 2025-09-07T08:06:25.2233716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2234119Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2234498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2234883Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2235227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2235583Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2235959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2236349Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2236744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2237138Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2237564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2238007Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2238165Z 2025-09-07T08:06:25.2238246Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2238459Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2238669Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2238878Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2239076Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2239280Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2239484Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2239690Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2239930Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2240290Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2240635Z return mod(**inputs) 2025-09-07T08:06:25.2241000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2241378Z outputs = self.model( 2025-09-07T08:06:25.2241731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2242116Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2242499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2242883Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2243241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2243604Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2244006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2244456Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2244899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2245458Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2245914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2246508Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2246714Z 2025-09-07T08:06:25.2246832Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2247346Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2247707Z return mod(**inputs) 2025-09-07T08:06:25.2248164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2248583Z outputs = self.model( 2025-09-07T08:06:25.2248988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2249402Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2249814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2250197Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2250537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2250895Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2251285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2251698Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2252116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2252521Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2252961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2253414Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2253577Z 2025-09-07T08:06:25.2253656Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2253866Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2254075Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2254281Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2254480Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2254698Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2254901Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2255108Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2255306Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2255512Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2255747Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2256106Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2256428Z return mod(**inputs) 2025-09-07T08:06:25.2256795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2257175Z outputs = self.model( 2025-09-07T08:06:25.2257536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2257922Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2258296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2258678Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2259027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2259386Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2259769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2259918Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2260170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2260275Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2260560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2260722Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2260727Z 2025-09-07T08:06:25.2260833Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2261029Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2261104Z return mod(**inputs) 2025-09-07T08:06:25.2261355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2261430Z outputs = self.model( 2025-09-07T08:06:25.2261685Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2261758Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2262012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2262084Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2262309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2262390Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2262641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2262740Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2262988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2263090Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2263369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2263485Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2263488Z 2025-09-07T08:06:25.2263569Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2263646Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2263730Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2263806Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2263888Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2263963Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2264037Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2264152Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2264350Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2264424Z return mod(**inputs) 2025-09-07T08:06:25.2264673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2264741Z outputs = self.model( 2025-09-07T08:06:25.2264999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2265074Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2265326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2265398Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2265615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2265735Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2265979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2266097Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2266342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2266446Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2266756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2266886Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2266889Z 2025-09-07T08:06:25.2267000Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2267197Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2267274Z return mod(**inputs) 2025-09-07T08:06:25.2267523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2267591Z outputs = self.model( 2025-09-07T08:06:25.2267897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2267972Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2268226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2268301Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2268523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2268603Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2268848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2268966Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2269210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2269310Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2269592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2269698Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2269708Z 2025-09-07T08:06:25.2269787Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2269864Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2269949Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2270024Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2270101Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2270184Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2270258Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2270340Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2270413Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2270488Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2270598Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2270799Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2270872Z return mod(**inputs) 2025-09-07T08:06:25.2271119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2271187Z outputs = self.model( 2025-09-07T08:06:25.2271448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2271549Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2271795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2271866Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2272084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2272164Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2272445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2272550Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2272791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2272892Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2273170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2273295Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2273306Z 2025-09-07T08:06:25.2273408Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2273600Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2273671Z return mod(**inputs) 2025-09-07T08:06:25.2273916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2273988Z outputs = self.model( 2025-09-07T08:06:25.2274228Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2274300Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2274548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2274619Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2274838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2274914Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2275156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2275260Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2275498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2275596Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2275868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2275979Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2275983Z 2025-09-07T08:06:25.2276060Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2276136Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2276218Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2276291Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2276371Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2276444Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2276519Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2276599Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2276698Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2276898Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2276962Z return mod(**inputs) 2025-09-07T08:06:25.2277205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2277312Z outputs = self.model( 2025-09-07T08:06:25.2277555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2277633Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2277875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2277979Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2278200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2278279Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2278525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2278631Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2278876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2278977Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2279248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2279381Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2279384Z 2025-09-07T08:06:25.2279488Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2279689Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2279753Z return mod(**inputs) 2025-09-07T08:06:25.2279996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2280072Z outputs = self.model( 2025-09-07T08:06:25.2280313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2280393Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2280630Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2280701Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2280921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2281001Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2281245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2281354Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2281605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2281704Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2281986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2282099Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2282102Z 2025-09-07T08:06:25.2282180Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2282265Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2282342Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2282416Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2282509Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2282583Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2282664Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2282737Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2282828Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2282951Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2283142Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2283213Z return mod(**inputs) 2025-09-07T08:06:25.2283465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2283532Z outputs = self.model( 2025-09-07T08:06:25.2283814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2283889Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2284139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2284211Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2284423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2284509Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2284761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2284865Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2285112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2285215Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2285496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2285622Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2285626Z 2025-09-07T08:06:25.2285736Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2285935Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2286007Z return mod(**inputs) 2025-09-07T08:06:25.2286257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2286324Z outputs = self.model( 2025-09-07T08:06:25.2286577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2286656Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2286992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2287078Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2287311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2287396Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2287651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2287760Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2288012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2288117Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2288408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2288528Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2288539Z 2025-09-07T08:06:25.2288618Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2288696Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2288782Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2288896Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2288971Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2289052Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2289126Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2289208Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2289310Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2289504Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2289576Z return mod(**inputs) 2025-09-07T08:06:25.2289855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2289930Z outputs = self.model( 2025-09-07T08:06:25.2290179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2290258Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2290511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2290584Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2290808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2290889Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2291143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2291252Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2291497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2291600Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2291883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2292020Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2292024Z 2025-09-07T08:06:25.2292127Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2292327Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2292395Z return mod(**inputs) 2025-09-07T08:06:25.2292645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2292723Z outputs = self.model( 2025-09-07T08:06:25.2292973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2293054Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2293301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2293377Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2293605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2293686Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2293940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2294047Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2294295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2294398Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2294678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2294814Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2294834Z 2025-09-07T08:06:25.2294913Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2294997Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2295075Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2295150Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2295233Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2295307Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2295387Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2295508Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2295585Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2295695Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2295887Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2295959Z return mod(**inputs) 2025-09-07T08:06:25.2296208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2296278Z outputs = self.model( 2025-09-07T08:06:25.2296533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2296606Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2296857Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2296951Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2297172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2297259Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2297506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2297611Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2297858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2297952Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2298239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2298368Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2298372Z 2025-09-07T08:06:25.2298485Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2298682Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2298755Z return mod(**inputs) 2025-09-07T08:06:25.2299004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2299074Z outputs = self.model( 2025-09-07T08:06:25.2299332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2299407Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2299662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2299734Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2299953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2300040Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2300284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2300386Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2300633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2300770Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2301050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2301156Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2301160Z 2025-09-07T08:06:25.2301244Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2301320Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2301433Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2301510Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2301585Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2301669Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2301743Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2301826Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2301938Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2302134Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2302205Z return mod(**inputs) 2025-09-07T08:06:25.2302449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2302522Z outputs = self.model( 2025-09-07T08:06:25.2302772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2302848Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2303105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2303187Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2303404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2303486Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2303726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2303842Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2304088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2304189Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2304481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2304613Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2304616Z 2025-09-07T08:06:25.2304716Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2304906Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2304982Z return mod(**inputs) 2025-09-07T08:06:25.2305226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2305298Z outputs = self.model( 2025-09-07T08:06:25.2305540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2305620Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2305861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2305931Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2306150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2306228Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2306476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2306613Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2306853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2306956Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2307257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2307370Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2307374Z 2025-09-07T08:06:25.2307451Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2307526Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2307608Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2307681Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2307764Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2307839Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2307911Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2307994Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2308069Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2308160Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2308264Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2308461Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2308537Z return mod(**inputs) 2025-09-07T08:06:25.2308792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2308865Z outputs = self.model( 2025-09-07T08:06:25.2309110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2309185Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2309432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2309502Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2309720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2309798Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2310050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2310146Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2310385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2310485Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2310760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2310893Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2310896Z 2025-09-07T08:06:25.2310997Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2311199Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2311265Z return mod(**inputs) 2025-09-07T08:06:25.2311517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2311594Z outputs = self.model( 2025-09-07T08:06:25.2311851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2311930Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2312169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2312281Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2312507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2312584Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2312840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2312967Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2313212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2313312Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2313585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2313697Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2313700Z 2025-09-07T08:06:25.2313776Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2313857Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2313929Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2314001Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2314082Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2314155Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2314238Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2314338Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2314526Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2314596Z return mod(**inputs) 2025-09-07T08:06:25.2314843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2314920Z outputs = self.model( 2025-09-07T08:06:25.2315167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2315241Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2315494Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2315566Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2315790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2315869Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2316117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2316231Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2316475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2316580Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2316862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2316995Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2316999Z 2025-09-07T08:06:25.2317102Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2317297Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2317372Z return mod(**inputs) 2025-09-07T08:06:25.2317622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2317697Z outputs = self.model( 2025-09-07T08:06:25.2317957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2318072Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2318319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2318389Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2318608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2318716Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2318963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2319069Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2319308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2319412Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2319688Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2319799Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2319802Z 2025-09-07T08:06:25.2319879Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2319954Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2320037Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2320114Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2320196Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2320268Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2320343Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2320424Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2320497Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2320578Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2320680Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2320870Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2320944Z return mod(**inputs) 2025-09-07T08:06:25.2321190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2321264Z outputs = self.model( 2025-09-07T08:06:25.2321516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2321591Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2321844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2321914Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2322137Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2322218Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2322465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2322570Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2322817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2322921Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2323202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2323337Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2323340Z 2025-09-07T08:06:25.2323442Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2323655Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2323747Z return mod(**inputs) 2025-09-07T08:06:25.2323994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2324069Z outputs = self.model( 2025-09-07T08:06:25.2324316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2324390Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2324683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2324758Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2324983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2325063Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2325317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2325414Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2325659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2325762Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2326043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2326158Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2326162Z 2025-09-07T08:06:25.2326242Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2326318Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2326401Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2326476Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2326562Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2326635Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2326710Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2326796Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2327030Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2327343Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2327428Z return mod(**inputs) 2025-09-07T08:06:25.2327817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2327911Z outputs = self.model( 2025-09-07T08:06:25.2328279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2328372Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2328611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2328693Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2328908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2328987Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2329245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2329353Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2329605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2329696Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2329977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2330148Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2330151Z 2025-09-07T08:06:25.2330255Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2330465Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2330532Z return mod(**inputs) 2025-09-07T08:06:25.2330829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2330901Z outputs = self.model( 2025-09-07T08:06:25.2331156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2331238Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2331493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2331576Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2331797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2331879Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2332145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2332248Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2332495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2332587Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2332859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2332969Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2332975Z 2025-09-07T08:06:25.2333052Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2333135Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2333208Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2333290Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2333365Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2333437Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2333517Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2333590Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2333665Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2333772Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2333963Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2334034Z return mod(**inputs) 2025-09-07T08:06:25.2334272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2334349Z outputs = self.model( 2025-09-07T08:06:25.2334589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2334662Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2334908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2334978Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2335195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2335278Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2335524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2335630Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2335919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2336019Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2336292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2336422Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2336425Z 2025-09-07T08:06:25.2336577Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2336768Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2336840Z return mod(**inputs) 2025-09-07T08:06:25.2337081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2337154Z outputs = self.model( 2025-09-07T08:06:25.2337395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2337466Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2337716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2337786Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2338001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2338080Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2338322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 473, in forward 2025-09-07T08:06:25.2338426Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:06:25.2338673Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2338777Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2339065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2339175Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2339178Z 2025-09-07T08:06:25.2339256Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2339331Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2339416Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2339493Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2339575Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2339650Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2339724Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2339804Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2339906Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2340112Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2340178Z return mod(**inputs) 2025-09-07T08:06:25.2340426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2340502Z outputs = self.model( 2025-09-07T08:06:25.2340747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2340828Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2341073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2341143Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2341366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2341473Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2341743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2341850Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2342103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2342197Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2342506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:06:25.2342643Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:06:25.2342646Z 2025-09-07T08:06:25.2342751Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2342955Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:06:25.2343022Z return mod(**inputs) 2025-09-07T08:06:25.2343271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1404, in forward 2025-09-07T08:06:25.2343345Z outputs = self.model( 2025-09-07T08:06:25.2343592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1279, in forward 2025-09-07T08:06:25.2343673Z decoder_outputs = self.decoder( 2025-09-07T08:06:25.2343922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1127, in forward 2025-09-07T08:06:25.2344002Z layer_outputs = decoder_layer( 2025-09-07T08:06:25.2344220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:06:25.2344298Z return super().__call__(*args, **kwargs) 2025-09-07T08:06:25.2344553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 490, in forward 2025-09-07T08:06:25.2344662Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:06:25.2344916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 319, in forward 2025-09-07T08:06:25.2345165Z attn_output, attn_weights = attention_interface( 2025-09-07T08:06:25.2345461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:06:25.2345580Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:06:25.2345584Z 2025-09-07T08:06:25.2345664Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2345752Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2345828Z cudagraph partition due to non gpu ops 2025-09-07T08:06:25.2345931Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:06:25.2346137Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:06:25.2346205Z return mod(**inputs)
2025-09-07T08:06:25.2346463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/m2m_100/modeling_m2m_100.py", line 1429, in forward
2025-09-07T08:06:25.2346634Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-09-07T08:06:25.2346638Z
2025-09-07T08:06:46.3295166Z Compilation time (from dynamo_timed): 73.188190373
2025-09-07T08:06:46.3333284Z pass
2025-09-07T08:06:46.3333841Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:06:46.3334938Z TIMING: _recursive_pre_grad_passes:0.09047 _recursive_joint_graph_passes:0.82408 _recursive_post_grad_passes:0.14844 linear_unary_template_precompiling:5.73214 linear_unary_template_autotuning:1.51223 async_compile.wait:0.8539 code_gen:20.78831 inductor_compile:54.23833 backend_compile:67.37111 gc:0.0008 entire_frame_compile:73.18819 total_wall_time:73.18819
2025-09-07T08:06:46.3336434Z STATS: call_* op count: 1016 | FakeTensorMode.__torch_dispatch__:68435 | FakeTensor.__torch_dispatch__:7356 | ProxyTorchDispatchMode.__torch_dispatch__:19125
2025-09-07T08:06:46.3336991Z Dynamo produced 1 graphs covering 1016 ops with 0 graph breaks (0 unique)
2025-09-07T08:06:50.0089081Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:06:50.0089910Z import pynvml # type: ignore[import]
2025-09-07T08:06:52.6929588Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:06:52.6930701Z from pkg_resources import resource_filename
2025-09-07T08:06:53.3573831Z
2025-09-07T08:06:55.8767551Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:06:55.8767977Z loading model: 0it [00:02, ?it/s]
2025-09-07T08:06:55.8768298Z cpu eval MBartForCausalLM
2025-09-07T08:06:57.4552878Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:06:57.7897298Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:06:58.1146138Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:07:21.5159789Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5160126Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5160365Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5160595Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5160862Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5161082Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5161312Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5161538Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5161788Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5161995Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5162198Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5162422Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5162783Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5163019Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5163245Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5163480Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5163708Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5163940Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5164162Z cudagraph partition due to non gpu ops
2025-09-07T08:07:21.5164490Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:07:21.5164995Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5165385Z return mod(**inputs) 2025-09-07T08:07:21.5165827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5166285Z outputs = self.model.decoder( 2025-09-07T08:07:21.5166743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5167419Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5167837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5168281Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5168711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5169581Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5170036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5170496Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5170971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:07:21.5171610Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:07:21.5171818Z 2025-09-07T08:07:21.5171930Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5172315Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5172647Z return mod(**inputs) 2025-09-07T08:07:21.5173024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5173440Z outputs = self.model.decoder( 2025-09-07T08:07:21.5173851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5174280Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5174666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5175076Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5175511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5175962Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5176373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5176777Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5177229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:07:21.5177689Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:07:21.5177856Z 2025-09-07T08:07:21.5177947Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5178166Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5178374Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5178584Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5178796Z cudagraph partition due to non gpu ops 
2025-09-07T08:07:21.5179008Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5179212Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5179423Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5179631Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5179841Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5180049Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5180363Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5180735Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5181077Z return mod(**inputs) 2025-09-07T08:07:21.5181453Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5181864Z outputs = self.model.decoder( 2025-09-07T08:07:21.5182268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5182673Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5183035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5183428Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5183893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5184363Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5184779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5185196Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5185635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:07:21.5186165Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:07:21.5186362Z 2025-09-07T08:07:21.5186473Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:07:21.5186842Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5187169Z return mod(**inputs) 2025-09-07T08:07:21.5188402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5188926Z outputs = self.model.decoder( 2025-09-07T08:07:21.5189409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5189942Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5190340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5190752Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5191210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5191694Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5192174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5192628Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5193090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:07:21.5193563Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:07:21.5193732Z 2025-09-07T08:07:21.5193830Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5194060Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5194274Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5194481Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5194691Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5194900Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5195118Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5195342Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5195568Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5195853Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5196099Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:07:21.5196484Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5196826Z return mod(**inputs) 2025-09-07T08:07:21.5197208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5197627Z outputs = self.model.decoder( 2025-09-07T08:07:21.5198062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5198498Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5198887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5199287Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5199679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5200282Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5200704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5201130Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5201578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:07:21.5202125Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:07:21.5202322Z 2025-09-07T08:07:21.5202435Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5202808Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5203172Z return mod(**inputs) 2025-09-07T08:07:21.5203569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5204005Z outputs = self.model.decoder( 2025-09-07T08:07:21.5204439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5204857Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5205241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5205635Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5206058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5206497Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5207024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5207482Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5207955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:07:21.5208446Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:07:21.5208635Z 2025-09-07T08:07:21.5208720Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5208942Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5209146Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5209358Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5209572Z cudagraph partition due to non gpu ops 
2025-09-07T08:07:21.5209787Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5209986Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5210194Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5210400Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5210606Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5210807Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5211051Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5211424Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5211756Z return mod(**inputs) 2025-09-07T08:07:21.5212130Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5212521Z outputs = self.model.decoder( 2025-09-07T08:07:21.5212917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5213314Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5213670Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5214033Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5214431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5214901Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5215313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5215741Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5216174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:07:21.5216719Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:07:21.5216923Z 2025-09-07T08:07:21.5217034Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:07:21.5217398Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5217732Z return mod(**inputs) 2025-09-07T08:07:21.5218096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5218499Z outputs = self.model.decoder( 2025-09-07T08:07:21.5218873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5219256Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5219627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5219985Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5220372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5220784Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5221195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5221600Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5222056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:07:21.5222545Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:07:21.5222718Z 2025-09-07T08:07:21.5222812Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5223044Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5223304Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5223526Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5223737Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5223944Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5224143Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5224348Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5224555Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5224758Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5224988Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:07:21.5225364Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5225705Z return mod(**inputs) 2025-09-07T08:07:21.5226068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5226448Z outputs = self.model.decoder( 2025-09-07T08:07:21.5226837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5227224Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5227582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5227952Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5228347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5228816Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5229235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5229655Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5230111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:07:21.5230616Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:07:21.5230804Z 2025-09-07T08:07:21.5230909Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5231273Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5231597Z return mod(**inputs) 2025-09-07T08:07:21.5231961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5232355Z outputs = self.model.decoder( 2025-09-07T08:07:21.5232744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5233133Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5233487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5233852Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5234258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5234677Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5235097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5235509Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5235960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:07:21.5236421Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:07:21.5236588Z 2025-09-07T08:07:21.5236671Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5236889Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5237102Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5237306Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5237521Z cudagraph partition due to non gpu ops 
2025-09-07T08:07:21.5237731Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5237940Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5238144Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5238354Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5238563Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5238769Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5239006Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5239370Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5239708Z return mod(**inputs) 2025-09-07T08:07:21.5240102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5240509Z outputs = self.model.decoder( 2025-09-07T08:07:21.5240903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5241292Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5241647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5242009Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5242394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5242849Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5243262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5243690Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5244162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:07:21.5244774Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:07:21.5244980Z 2025-09-07T08:07:21.5245291Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:07:21.5245685Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5246037Z return mod(**inputs) 2025-09-07T08:07:21.5246425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5247010Z outputs = self.model.decoder( 2025-09-07T08:07:21.5247452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5247885Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5248241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5248615Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5249014Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5249433Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5249845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5250264Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5250707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:07:21.5251170Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:07:21.5251340Z 2025-09-07T08:07:21.5251424Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5251641Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5251836Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5252039Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5252245Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5252449Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5252644Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5252848Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5253052Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5253257Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5253489Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:07:21.5253839Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5254153Z return mod(**inputs) 2025-09-07T08:07:21.5254507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5254880Z outputs = self.model.decoder( 2025-09-07T08:07:21.5255241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5255616Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5255952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5256299Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5256669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5257219Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5257620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5258018Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5258444Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:07:21.5258902Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:07:21.5259146Z 2025-09-07T08:07:21.5259250Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5259603Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5259918Z return mod(**inputs) 2025-09-07T08:07:21.5260272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5260645Z outputs = self.model.decoder( 2025-09-07T08:07:21.5261012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5261395Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5261736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5262092Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5262466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5262866Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5263258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5263658Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5264078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:07:21.5264529Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:07:21.5264691Z 2025-09-07T08:07:21.5264770Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5264980Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5265183Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5265376Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5265576Z cudagraph partition due to non gpu ops 
2025-09-07T08:07:21.5265781Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5265981Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5266172Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5266376Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5266578Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5266777Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5267000Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5267355Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5267676Z return mod(**inputs) 2025-09-07T08:07:21.5268046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5268419Z outputs = self.model.decoder( 2025-09-07T08:07:21.5268794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5269166Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5269505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5269861Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5270231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5270727Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5271122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5271521Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5271950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:07:21.5272440Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:07:21.5272625Z 2025-09-07T08:07:21.5272730Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:07:21.5273085Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5273409Z return mod(**inputs) 2025-09-07T08:07:21.5273773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5274166Z outputs = self.model.decoder( 2025-09-07T08:07:21.5274543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5274919Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5275268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5275627Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5276022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5276429Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5276837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5277247Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5277667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:07:21.5278115Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:07:21.5278278Z 2025-09-07T08:07:21.5278360Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5278572Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5278784Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5278982Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5279191Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5279397Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5279604Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5279805Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5280008Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5280219Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5280453Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:07:21.5280809Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5281144Z return mod(**inputs) 2025-09-07T08:07:21.5281538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5281938Z outputs = self.model.decoder( 2025-09-07T08:07:21.5282329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5282735Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5283093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5283472Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5283875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5284312Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5284749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5285169Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5285644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:07:21.5286165Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:07:21.5286362Z 2025-09-07T08:07:21.5286516Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5287005Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5287382Z return mod(**inputs) 2025-09-07T08:07:21.5287798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5288198Z outputs = self.model.decoder( 2025-09-07T08:07:21.5288612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5289003Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5289353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5289763Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5290146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5290553Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5290951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5291357Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5291792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:07:21.5292238Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:07:21.5292405Z 2025-09-07T08:07:21.5292486Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5292697Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5292905Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5293103Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5293310Z cudagraph partition due to non gpu ops 
2025-09-07T08:07:21.5293516Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5293720Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5293985Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5294192Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5294405Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5294605Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5294843Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5295205Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5295533Z return mod(**inputs) 2025-09-07T08:07:21.5295891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5296281Z outputs = self.model.decoder( 2025-09-07T08:07:21.5296661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5297050Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5297387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5297747Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5298131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5298585Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5298994Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5299393Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5299849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:07:21.5300312Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:07:21.5300527Z 2025-09-07T08:07:21.5300643Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:07:21.5301000Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5301324Z return mod(**inputs) 2025-09-07T08:07:21.5301675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5302052Z outputs = self.model.decoder( 2025-09-07T08:07:21.5302425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5302803Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5303141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5303501Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5303888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5304303Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5304710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5305125Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5305561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:07:21.5306022Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:07:21.5306177Z 2025-09-07T08:07:21.5306262Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5306463Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5306671Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5306877Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5307083Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5307283Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5307489Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5307694Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5307907Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5308112Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5308361Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:07:21.5308756Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5309108Z return mod(**inputs) 2025-09-07T08:07:21.5309515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5309947Z outputs = self.model.decoder( 2025-09-07T08:07:21.5310361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5310767Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5311127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5311489Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5311888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5312307Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5312770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5313188Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5313618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:07:21.5314095Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:07:21.5314290Z 2025-09-07T08:07:21.5314438Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5314810Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5315147Z return mod(**inputs) 2025-09-07T08:07:21.5315510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5315906Z outputs = self.model.decoder( 2025-09-07T08:07:21.5316299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5316683Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5317020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5317381Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5317770Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5318179Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5318584Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5318984Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5319420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:07:21.5319875Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:07:21.5320033Z 2025-09-07T08:07:21.5320121Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5320337Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5320544Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5320756Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5320966Z cudagraph partition due to non gpu ops 
2025-09-07T08:07:21.5321178Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5321381Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5321590Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5321799Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5322006Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5322207Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5322448Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5322820Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5323152Z return mod(**inputs) 2025-09-07T08:07:21.5323516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5323910Z outputs = self.model.decoder( 2025-09-07T08:07:21.5324297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5324692Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5325050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5325414Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5325808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5326276Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5326731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5327268Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5327745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:07:21.5328271Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:07:21.5328526Z 2025-09-07T08:07:21.5328634Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:07:21.5328993Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5329309Z return mod(**inputs) 2025-09-07T08:07:21.5329675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1864, in forward 2025-09-07T08:07:21.5330062Z outputs = self.model.decoder( 2025-09-07T08:07:21.5330458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:07:21.5330883Z layer_outputs = decoder_layer( 2025-09-07T08:07:21.5331254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:07:21.5331650Z return super().__call__(*args, **kwargs) 2025-09-07T08:07:21.5332068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:07:21.5332487Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:07:21.5332900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:07:21.5333309Z attn_output, attn_weights = attention_interface( 2025-09-07T08:07:21.5333746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:07:21.5334197Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:07:21.5334355Z 2025-09-07T08:07:21.5334445Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5334652Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5334865Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5335076Z cudagraph partition due to non gpu ops 2025-09-07T08:07:21.5335312Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:07:21.5335669Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:07:21.5335988Z return mod(**inputs) 2025-09-07T08:07:21.5336350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1886, in forward 2025-09-07T08:07:21.5336803Z loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) 2025-09-07T08:07:21.5336999Z 2025-09-07T08:07:29.8191008Z Compilation time (from dynamo_timed): 29.974671219 2025-09-07T08:07:29.8405180Z pass 2025-09-07T08:07:29.8405616Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:07:29.8406604Z TIMING: _recursive_pre_grad_passes:0.03662 _recursive_joint_graph_passes:0.38382 _recursive_post_grad_passes:0.0707 linear_unary_template_precompiling:0.01304 async_compile.wait:0.81255 code_gen:7.58524 inductor_compile:22.63768 backend_compile:27.6856 gc:0.00122 entire_frame_compile:29.97467 total_wall_time:29.97467 2025-09-07T08:07:29.8407967Z STATS: call_* op count: 375 | FakeTensorMode.__torch_dispatch__:27757 | FakeTensor.__torch_dispatch__:3096 | ProxyTorchDispatchMode.__torch_dispatch__:7520 2025-09-07T08:07:29.8408538Z Dynamo produced 1 graphs covering 375 ops with 0 graph breaks (0 unique) 2025-09-07T08:07:32.8209604Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. 
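The compile summary above ("Compilation time (from dynamo_timed)", the TIMING breakdown, and "Dynamo produced 1 graphs covering 375 ops with 0 graph breaks") is the harness reporting a clean single-graph compile. A small sketch, independent of this job, of how the same kind of graph/graph-break summary can be produced for an arbitrary function with the torch._dynamo.explain helper:

import torch
import torch._dynamo

def fn(x):
    # Any compilable function; gelu(x @ x) keeps the example tiny.
    return torch.nn.functional.gelu(x @ x)

x = torch.randn(8, 8)
explanation = torch._dynamo.explain(fn)(x)  # trace once, collect Dynamo stats
# Printing the ExplainOutput lists graph count, graph break count (with
# reasons), and the ops covered, analogous to the summary line above.
print(explanation)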
If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T08:07:32.8210988Z import pynvml # type: ignore[import] 2025-09-07T08:07:35.6311622Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-09-07T08:07:35.6312547Z from pkg_resources import resource_filename 2025-09-07T08:07:36.2959546Z 2025-09-07T08:07:41.0805894Z loading model: 0it [00:00, ?it/s] 2025-09-07T08:07:41.0806262Z loading model: 0it [00:04, ?it/s] 2025-09-07T08:07:41.0806552Z cpu eval MBartForConditionalGeneration 2025-09-07T08:07:44.7123993Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:07:45.3302032Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:07:45.9442995Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:08:33.3884277Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:08:33.3884974Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3885495Z return mod(**inputs) 2025-09-07T08:08:33.3886013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1436, in forward 2025-09-07T08:08:33.3892719Z decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id) 2025-09-07T08:08:33.3893344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 76, in shift_tokens_right 2025-09-07T08:08:33.3893878Z index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1) 2025-09-07T08:08:33.3894102Z 2025-09-07T08:08:33.3894202Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3894433Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3894651Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3894868Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3895091Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3895306Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3895515Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3895735Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3895950Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3896159Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3896363Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3896576Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3896788Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3897001Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3897206Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3897419Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3897657Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3897880Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3898087Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3898338Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.3898773Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3899125Z return mod(**inputs) 2025-09-07T08:08:33.3899523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3899932Z outputs = self.model( 2025-09-07T08:08:33.3900346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3901037Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3901499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3901902Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3902272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3902644Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3903221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3903687Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3904166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3904619Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.3905102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.3905633Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.3905845Z 2025-09-07T08:08:33.3905977Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.3906382Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3906757Z return mod(**inputs) 2025-09-07T08:08:33.3907168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3907598Z outputs = self.model( 2025-09-07T08:08:33.3908008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3908446Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3908875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3909288Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3909669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3910063Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3910505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3910951Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3911421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3911874Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.3912355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.3912852Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.3913033Z 2025-09-07T08:08:33.3913133Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3913359Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3913578Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3913794Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3914002Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3914219Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3914436Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3914654Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3914862Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3915078Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3915294Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3915542Z cudagraph partition due to non gpu ops. 
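The repeated "Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]" warnings earlier in this shard come from the benchmark harness flushing the accelerator allocator between runs even when the device is cpu. A hedged sketch of a device-aware guard (the helper name is hypothetical, not the harness's actual function) that would make the call a no-op on CPU shards:

import torch

def maybe_empty_device_cache(device: str) -> None:
    # Only CUDA and XPU expose a cached allocator to release; cpu has none,
    # so the warning in the log is purely informational.
    if device == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()

maybe_empty_device_cache("cpu")  # silently does nothing on a CPU-only run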
Found from : 2025-09-07T08:08:33.3915934Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3916318Z return mod(**inputs) 2025-09-07T08:08:33.3916698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3917104Z outputs = self.model( 2025-09-07T08:08:33.3917465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3917847Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3918295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3918690Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3919051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3919414Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3919792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3920196Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3920592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3920998Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.3921427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.3921901Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.3922091Z 2025-09-07T08:08:33.3922199Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.3922563Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3922896Z return mod(**inputs) 2025-09-07T08:08:33.3923259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3923653Z outputs = self.model( 2025-09-07T08:08:33.3924025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3924423Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3924812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3925226Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3925608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3926004Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3926426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3927081Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3927532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3927989Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.3928445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.3928900Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.3929064Z 2025-09-07T08:08:33.3929146Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3929367Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3929585Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3929801Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3930014Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3930222Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3930433Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3930647Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3930897Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3931101Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3931342Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.3931708Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3932048Z return mod(**inputs) 2025-09-07T08:08:33.3932413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3932853Z outputs = self.model( 2025-09-07T08:08:33.3933222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3933631Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3933992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3934372Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3934709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3935058Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3935432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3935811Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3936203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3936599Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.3937036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.3937513Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.3937695Z 2025-09-07T08:08:33.3937804Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.3938169Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3938502Z return mod(**inputs) 2025-09-07T08:08:33.3938869Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3939239Z outputs = self.model( 2025-09-07T08:08:33.3939586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3939967Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3940346Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3940734Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3941063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3941419Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3941797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3942190Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3942601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3942988Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.3943423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.3943874Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.3944030Z 2025-09-07T08:08:33.3944117Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3944322Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3944556Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3944757Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3944961Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3945340Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3945537Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3945738Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3945945Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3946152Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3946423Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3946664Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.3947025Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3947356Z return mod(**inputs) 2025-09-07T08:08:33.3947707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3948095Z outputs = self.model( 2025-09-07T08:08:33.3948448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3948829Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3949201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3949569Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3949923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3950282Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3950664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3951056Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3951459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3951858Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.3952292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.3952764Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.3952945Z 2025-09-07T08:08:33.3953050Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.3953409Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3953744Z return mod(**inputs) 2025-09-07T08:08:33.3954110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3954492Z outputs = self.model( 2025-09-07T08:08:33.3954847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3955235Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3955622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3956039Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3956416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3956804Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3957232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3957670Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3958078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3958490Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.3958985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.3959445Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.3959608Z 2025-09-07T08:08:33.3959701Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3959919Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3960126Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3960339Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3960596Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3960823Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3961043Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3961277Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3961489Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3961703Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3961937Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.3962317Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3962659Z return mod(**inputs) 2025-09-07T08:08:33.3963045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3963442Z outputs = self.model( 2025-09-07T08:08:33.3963840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3964283Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3964701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3965127Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3965506Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3965907Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3966349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3966796Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3967316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3967776Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.3968259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.3968749Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.3968954Z 2025-09-07T08:08:33.3969083Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.3969504Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3969864Z return mod(**inputs) 2025-09-07T08:08:33.3970283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3970721Z outputs = self.model( 2025-09-07T08:08:33.3971138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3971602Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3972069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3972502Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3972901Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3973314Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3973795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3974293Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3974759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3975226Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.3975714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.3976238Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.3976409Z 2025-09-07T08:08:33.3976494Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3976707Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3976916Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3977115Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3977322Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3977527Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3977733Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3977931Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3978135Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3978341Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3978547Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3978984Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.3979343Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3979717Z return mod(**inputs) 2025-09-07T08:08:33.3980083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3980464Z outputs = self.model( 2025-09-07T08:08:33.3980820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3981218Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3981633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3982048Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3982418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3982780Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3983182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3983598Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3983992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3984395Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.3984829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.3985305Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.3985498Z 2025-09-07T08:08:33.3985606Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.3985987Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3986302Z return mod(**inputs) 2025-09-07T08:08:33.3986671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3987048Z outputs = self.model( 2025-09-07T08:08:33.3987440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3987836Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3988216Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3988650Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3989001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3989368Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3989754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3990186Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3990638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3991040Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.3991479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.3991921Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.3992088Z 2025-09-07T08:08:33.3992168Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3992391Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3992600Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3992808Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3993009Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3993224Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3993438Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3993650Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3993855Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3994062Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.3994303Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.3994672Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.3994999Z return mod(**inputs) 2025-09-07T08:08:33.3995370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.3995750Z outputs = self.model( 2025-09-07T08:08:33.3996108Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.3996494Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.3996879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.3997271Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.3997623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.3997988Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.3998374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.3998786Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.3999190Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.3999606Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4000052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4000527Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4000722Z 2025-09-07T08:08:33.4000830Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4001218Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4001570Z return mod(**inputs) 2025-09-07T08:08:33.4001962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4002402Z outputs = self.model( 2025-09-07T08:08:33.4002790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.4003206Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.4003614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.4004019Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.4004438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4004832Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4005252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.4005684Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.4006111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4006552Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4007101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4007618Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4007798Z 2025-09-07T08:08:33.4007895Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4008128Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4008374Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4008590Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4008805Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4009009Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4009222Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4009434Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4009653Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4009857Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4010068Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4010306Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4010673Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4011000Z return mod(**inputs) 2025-09-07T08:08:33.4011374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4011767Z outputs = self.model( 2025-09-07T08:08:33.4012136Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.4012536Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.4012906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.4013297Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.4013655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4014026Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4014416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.4014825Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.4015238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4015645Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4016081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4016547Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4016791Z 2025-09-07T08:08:33.4016896Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4017254Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4017580Z return mod(**inputs) 2025-09-07T08:08:33.4017954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4018339Z outputs = self.model( 2025-09-07T08:08:33.4018744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.4032554Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.4033030Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.4033454Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.4033827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4034207Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4034615Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.4035045Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.4035466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4036019Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4036478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4036946Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4037164Z 2025-09-07T08:08:33.4037267Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4037487Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4037694Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4037903Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4038110Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4038316Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4038514Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4038720Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4038965Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4039170Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4039409Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4039780Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4040115Z return mod(**inputs) 2025-09-07T08:08:33.4040485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4040878Z outputs = self.model( 2025-09-07T08:08:33.4041257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.4041656Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.4042051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.4042442Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.4042797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4043177Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4043576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.4043991Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.4044400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4044932Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4045608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4046141Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4046348Z 2025-09-07T08:08:33.4046477Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4047045Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4047424Z return mod(**inputs) 2025-09-07T08:08:33.4047846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4048319Z outputs = self.model( 2025-09-07T08:08:33.4048678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.4049054Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.4049430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.4049813Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.4050161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4050518Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4050894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.4051299Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.4051699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4052111Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4052538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4052976Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4053145Z 2025-09-07T08:08:33.4053228Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4053439Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4053645Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4053840Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4054042Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4054244Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4054447Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4054637Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4054838Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4055037Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4055236Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4055457Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4055810Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4056128Z return mod(**inputs) 2025-09-07T08:08:33.4056484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4056850Z outputs = self.model( 2025-09-07T08:08:33.4057196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.4057571Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.4057938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.4058309Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.4058643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4059043Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4059420Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.4059812Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.4060201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4060595Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4061058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4061538Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4061724Z 2025-09-07T08:08:33.4061839Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4062201Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4062520Z return mod(**inputs) 2025-09-07T08:08:33.4062885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4063272Z outputs = self.model( 2025-09-07T08:08:33.4063640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.4064019Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.4064400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.4064785Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.4065131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4065492Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4065872Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.4066275Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.4066674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4067080Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4067521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4067970Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4068139Z 2025-09-07T08:08:33.4068221Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4068433Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4068643Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4068850Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4069049Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4069259Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4069466Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4069674Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4069875Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4070081Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4070316Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4070675Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4070993Z return mod(**inputs) 2025-09-07T08:08:33.4071359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4071739Z outputs = self.model( 2025-09-07T08:08:33.4072101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.4072477Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.4072891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.4073270Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.4073618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4073976Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4074357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.4074786Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.4075188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4075596Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4076034Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4076512Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4076705Z 2025-09-07T08:08:33.4076810Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4077169Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4077498Z return mod(**inputs) 2025-09-07T08:08:33.4077862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4078248Z outputs = self.model( 2025-09-07T08:08:33.4078612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.4079002Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.4079382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.4079763Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.4080110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4080479Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4080878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.4081287Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.4081691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4082109Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4082560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4083022Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4083189Z 2025-09-07T08:08:33.4083282Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4083494Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4083709Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4083919Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4084129Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4084336Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4084550Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4084764Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4084977Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4085182Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4085408Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4085650Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4086020Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4086374Z return mod(**inputs) 2025-09-07T08:08:33.4086827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4087337Z outputs = self.model( 2025-09-07T08:08:33.4087751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.4088187Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.4088572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.4089002Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.4089371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4089749Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4090141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.4090563Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.4090976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4091394Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4091849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4092330Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4092528Z 2025-09-07T08:08:33.4092636Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4093003Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4093337Z return mod(**inputs) 2025-09-07T08:08:33.4093711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4094103Z outputs = self.model( 2025-09-07T08:08:33.4094475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1248, in forward 2025-09-07T08:08:33.4094875Z encoder_outputs = self.encoder( 2025-09-07T08:08:33.4095265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 861, in forward 2025-09-07T08:08:33.4095653Z layer_outputs = encoder_layer( 2025-09-07T08:08:33.4096017Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4096391Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4096779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 321, in forward 2025-09-07T08:08:33.4097181Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:08:33.4097573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4097994Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4098430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4098879Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4099038Z 2025-09-07T08:08:33.4099124Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4099331Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4099545Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4099751Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4099956Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4100155Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4100361Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4100567Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4100790Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4101005Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4101240Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4101602Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4101925Z return mod(**inputs) 2025-09-07T08:08:33.4102284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4102671Z outputs = self.model( 2025-09-07T08:08:33.4103068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4103455Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4103835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4104210Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4104559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4104917Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4105305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4105711Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4106124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4106531Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4106966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4107436Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4107621Z 2025-09-07T08:08:33.4107727Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4108089Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4108423Z return mod(**inputs) 2025-09-07T08:08:33.4108772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4109143Z outputs = self.model( 2025-09-07T08:08:33.4109489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4109868Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4110239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4110613Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4110952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4111299Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4111675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4112076Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4112483Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4112879Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4113316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4113770Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4113929Z 2025-09-07T08:08:33.4114028Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4114235Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4114432Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4114664Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4114865Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4115070Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4115269Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4115479Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4115716Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4116076Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4116432Z return mod(**inputs) 2025-09-07T08:08:33.4116787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4117157Z outputs = self.model( 2025-09-07T08:08:33.4117509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4117888Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4118250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4118624Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4118971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4119322Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4119693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4120097Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4120501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4120897Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4121319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4121775Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4121963Z 2025-09-07T08:08:33.4122069Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4122426Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4122749Z return mod(**inputs) 2025-09-07T08:08:33.4123117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4123479Z outputs = self.model( 2025-09-07T08:08:33.4123829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4124205Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4124571Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4124938Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4125295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4125650Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4126036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4126450Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4126932Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4127361Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4127810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4128275Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4128477Z 2025-09-07T08:08:33.4128569Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4128780Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4128995Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4129206Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4129418Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4129621Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4129833Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4130093Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4130304Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4130502Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4130711Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4130947Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4131305Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4131622Z return mod(**inputs) 2025-09-07T08:08:33.4131990Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4132373Z outputs = self.model( 2025-09-07T08:08:33.4132737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4133120Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4133493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4133877Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4134225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4134597Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4134976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4135385Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4135792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4136196Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4136638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4137103Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4137292Z 2025-09-07T08:08:33.4137396Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4137752Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4138077Z return mod(**inputs) 2025-09-07T08:08:33.4138440Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4138816Z outputs = self.model( 2025-09-07T08:08:33.4139175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4139561Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4139938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4140322Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4140664Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4141023Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4141410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4141811Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4142248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4142647Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4143081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4143531Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4143689Z 2025-09-07T08:08:33.4143776Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4144009Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4144220Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4144423Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4144632Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4144830Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4145183Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4145436Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4145800Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4146121Z return mod(**inputs) 2025-09-07T08:08:33.4146488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4146882Z outputs = self.model( 2025-09-07T08:08:33.4147244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4147632Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4147995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4148380Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4148717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4149070Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4149439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4149854Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4150268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4150681Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4151106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4151557Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4151740Z 2025-09-07T08:08:33.4151842Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4152188Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4152502Z return mod(**inputs) 2025-09-07T08:08:33.4152855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4153219Z outputs = self.model( 2025-09-07T08:08:33.4153572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4153946Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4154314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4154685Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4155013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4155363Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4155738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4156217Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4156619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4157019Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4157443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4157955Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4158110Z 2025-09-07T08:08:33.4158196Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4158395Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4158601Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4158801Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4159003Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4159197Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4159399Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4159597Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4159796Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4159988Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4160186Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4160413Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4160765Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4161074Z return mod(**inputs) 2025-09-07T08:08:33.4161432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4161852Z outputs = self.model( 2025-09-07T08:08:33.4162226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4162629Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4163011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4163404Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4163761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4164133Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4164539Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4164950Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4165376Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4165835Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4166306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4166818Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4167076Z 2025-09-07T08:08:33.4167195Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4167592Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4167952Z return mod(**inputs) 2025-09-07T08:08:33.4168349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4168755Z outputs = self.model( 2025-09-07T08:08:33.4169186Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4169605Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4170045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4170481Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4170839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4171221Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4171617Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4172055Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4172461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4172884Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4173330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4173792Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4173955Z 2025-09-07T08:08:33.4174044Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4174253Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4174463Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4174671Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4174880Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4175081Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4175294Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4175504Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4175749Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4176116Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4176438Z return mod(**inputs) 2025-09-07T08:08:33.4176816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4177195Z outputs = self.model( 2025-09-07T08:08:33.4177557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4177936Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4178314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4178697Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4179042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4179400Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4179776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4180190Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4180616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4181031Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4181465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4181944Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4182137Z 2025-09-07T08:08:33.4182247Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4182611Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4182940Z return mod(**inputs) 2025-09-07T08:08:33.4183304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4183695Z outputs = self.model( 2025-09-07T08:08:33.4184084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4184471Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4184850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4185228Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4185576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4185969Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4186361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4186769Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4187184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4187594Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4188032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4188481Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4188641Z 2025-09-07T08:08:33.4188721Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4188930Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4189139Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4189349Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4189543Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4189746Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4189949Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4190153Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4190354Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4190550Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4190786Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4191143Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4191472Z return mod(**inputs) 2025-09-07T08:08:33.4191831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4192212Z outputs = self.model( 2025-09-07T08:08:33.4192575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4192962Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4193332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4193719Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4194066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4194434Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4194824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4195227Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4195634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4196039Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4196477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4196952Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4197133Z 2025-09-07T08:08:33.4197238Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4197634Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4197968Z return mod(**inputs) 2025-09-07T08:08:33.4198338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4198726Z outputs = self.model( 2025-09-07T08:08:33.4199117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4199548Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4199940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4200335Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4200683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4201055Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4201478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4201929Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4202384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4202826Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4203303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4203791Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4203968Z 2025-09-07T08:08:33.4204062Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4204289Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4204511Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4204733Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4204955Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4205181Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4205395Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4205616Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4205869Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4206266Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4206614Z return mod(**inputs) 2025-09-07T08:08:33.4207153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4207587Z outputs = self.model( 2025-09-07T08:08:33.4207982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4208378Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4208762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4209148Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4209499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4209863Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4210249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4210678Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4211102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4211518Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4211965Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4212480Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4212674Z 2025-09-07T08:08:33.4212781Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4213157Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4213494Z return mod(**inputs) 2025-09-07T08:08:33.4213868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4214283Z outputs = self.model( 2025-09-07T08:08:33.4214659Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4215055Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4215443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4215841Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4216193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4216555Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4216946Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4217369Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4217789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4218206Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4218655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4219115Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4219276Z 2025-09-07T08:08:33.4219367Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4219574Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4219779Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4219985Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4220187Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4220382Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4220584Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4220789Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4220990Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4221186Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4221421Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4221777Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4222101Z return mod(**inputs) 2025-09-07T08:08:33.4222454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4222842Z outputs = self.model( 2025-09-07T08:08:33.4223202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4223584Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4223957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4224331Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4224679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4225043Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4225430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4225830Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4226272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4226682Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4227132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4227610Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4227797Z 2025-09-07T08:08:33.4227939Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4228307Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4228629Z return mod(**inputs) 2025-09-07T08:08:33.4228993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4229383Z outputs = self.model( 2025-09-07T08:08:33.4229748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4230143Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4230529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4230923Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4231277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4231638Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4232033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4232440Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4232850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4233263Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4233710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4234167Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4234329Z 2025-09-07T08:08:33.4234419Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4234635Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4234841Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4235057Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4235269Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4235476Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4235677Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4235885Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4236124Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4236488Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4236810Z return mod(**inputs) 2025-09-07T08:08:33.4237179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4237571Z outputs = self.model( 2025-09-07T08:08:33.4237940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4238332Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4238715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4239168Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4239524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4239894Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4240331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4240786Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4241224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4241641Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4242117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4242623Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4242829Z 2025-09-07T08:08:33.4242944Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4243326Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4243677Z return mod(**inputs) 2025-09-07T08:08:33.4244071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4244477Z outputs = self.model( 2025-09-07T08:08:33.4244871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4245494Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4245916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4246330Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4246709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4247170Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4247604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4248081Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4248475Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4248867Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4249292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4249737Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4249892Z 2025-09-07T08:08:33.4249977Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4250176Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4250379Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4250580Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4250785Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4250981Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4251188Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4251389Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4251590Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4251795Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4251996Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4252220Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4252563Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4252884Z return mod(**inputs) 2025-09-07T08:08:33.4253232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4253600Z outputs = self.model( 2025-09-07T08:08:33.4253945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4254361Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4254744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4255113Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4255448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4255795Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4256205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4256597Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4257002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4257388Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4257810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4258271Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4258446Z 2025-09-07T08:08:33.4258550Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4258899Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4259213Z return mod(**inputs) 2025-09-07T08:08:33.4259576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4259933Z outputs = self.model( 2025-09-07T08:08:33.4260285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4260661Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4261042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4261410Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4261733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4262075Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4262441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4262827Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4263208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4263599Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4264028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4264143Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4264150Z 2025-09-07T08:08:33.4264232Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4264312Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4264395Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4264470Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4264555Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4264630Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4264705Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4264817Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4265019Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4265092Z return mod(**inputs) 2025-09-07T08:08:33.4265353Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4265419Z outputs = self.model( 2025-09-07T08:08:33.4265704Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4265777Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4266023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4266095Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4266307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4266427Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4266671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4266785Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4267026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4267123Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4267409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4267534Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4267540Z 2025-09-07T08:08:33.4267649Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4267843Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4267914Z return mod(**inputs) 2025-09-07T08:08:33.4268158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4268225Z outputs = self.model( 2025-09-07T08:08:33.4268480Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4268555Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4268803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4268874Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4269084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4269172Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4269413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4269526Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4269765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4269865Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4270143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4270248Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4270251Z 2025-09-07T08:08:33.4270337Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4270413Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4270491Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4270564Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4270640Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4270721Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4270796Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4270879Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4270953Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4271028Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4271111Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4271233Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4271449Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4271515Z return mod(**inputs) 2025-09-07T08:08:33.4271763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4271841Z outputs = self.model( 2025-09-07T08:08:33.4272193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4272275Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4272521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4272590Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4272806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4272890Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4273135Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4273233Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4273474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4273575Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4273858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4273987Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4273991Z 2025-09-07T08:08:33.4274088Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4274285Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4274352Z return mod(**inputs) 2025-09-07T08:08:33.4274596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4274679Z outputs = self.model( 2025-09-07T08:08:33.4274922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4275005Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4275251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4275323Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4275544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4275622Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4275867Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4275965Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4276212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4276306Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4276581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4276696Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4276699Z 2025-09-07T08:08:33.4276776Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4276857Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4276931Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4277004Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4277085Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4277200Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4277280Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4277354Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4277454Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4277655Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4277719Z return mod(**inputs) 2025-09-07T08:08:33.4278004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4278071Z outputs = self.model( 2025-09-07T08:08:33.4278313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4278395Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4278638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4278720Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4278927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4279005Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4279248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4279353Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4279603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4279695Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4279975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4280102Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4280106Z 2025-09-07T08:08:33.4280205Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4280407Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4280472Z return mod(**inputs) 2025-09-07T08:08:33.4280730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4280796Z outputs = self.model( 2025-09-07T08:08:33.4281047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4281127Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4281373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4281451Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4281669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4281757Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4282001Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4282106Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4282360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4282455Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4282742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4282847Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4282851Z 2025-09-07T08:08:33.4282929Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4283046Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4283123Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4283208Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4283284Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4283360Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4283443Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4283519Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4283601Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4283703Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4283809Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4284011Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4284077Z return mod(**inputs) 2025-09-07T08:08:33.4284334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4284405Z outputs = self.model( 2025-09-07T08:08:33.4284651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4284734Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4284982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4285064Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4285288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4285378Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4285632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4285734Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4285997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4286097Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4286394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4286526Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4286530Z 2025-09-07T08:08:33.4286636Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4286851Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4287038Z return mod(**inputs) 2025-09-07T08:08:33.4287332Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4287411Z outputs = self.model( 2025-09-07T08:08:33.4287698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4287788Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4288066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4288159Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4288406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4288513Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4288759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4288858Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4289109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4289204Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4289526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4289632Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4289636Z 2025-09-07T08:08:33.4289720Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4289796Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4289870Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4289950Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4290055Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4290131Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4290213Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4290289Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4290399Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4290597Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4290669Z return mod(**inputs) 2025-09-07T08:08:33.4290931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4291001Z outputs = self.model( 2025-09-07T08:08:33.4291255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4291331Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4291590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4291663Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4291882Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4291972Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4292232Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4292346Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4292587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4292681Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4292975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4293104Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4293108Z 2025-09-07T08:08:33.4293220Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4293418Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4293490Z return mod(**inputs) 2025-09-07T08:08:33.4293741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4293811Z outputs = self.model( 2025-09-07T08:08:33.4294065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4294140Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4294397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4294471Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4294687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4294774Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4295018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4295153Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4295415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4295510Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4295803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4295909Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4295912Z 2025-09-07T08:08:33.4296038Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4296118Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4296199Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4296272Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4296347Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4296428Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4296504Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4296588Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4296663Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4296737Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4299328Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4299541Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4299609Z return mod(**inputs) 2025-09-07T08:08:33.4299884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4299955Z outputs = self.model( 2025-09-07T08:08:33.4300214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4300293Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4300550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4300636Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4300865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4300976Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4301241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4301343Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4301609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4301709Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4302009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4302155Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4302161Z 2025-09-07T08:08:33.4302265Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4302469Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4302538Z return mod(**inputs) 2025-09-07T08:08:33.4302795Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4302864Z outputs = self.model( 2025-09-07T08:08:33.4303113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4303197Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4303444Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4303525Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4303768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4303875Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4304128Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4304224Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4304484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4304597Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4304886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4304995Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4304998Z 2025-09-07T08:08:33.4305085Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4305165Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4305240Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4305323Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4305398Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4306374Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4306465Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4306539Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4306647Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4306842Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4306908Z return mod(**inputs) 2025-09-07T08:08:33.4307158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4307225Z outputs = self.model( 2025-09-07T08:08:33.4307473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4307550Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4307799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4307872Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4308083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4308172Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4308419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4308532Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4308777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4308873Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4309157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4309283Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4309288Z 2025-09-07T08:08:33.4309397Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4309587Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4309659Z return mod(**inputs) 2025-09-07T08:08:33.4309897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4309964Z outputs = self.model( 2025-09-07T08:08:33.4310210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4310283Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4310564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4310633Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4310846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4310929Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4311189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4311301Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4311544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4311636Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4311917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4312023Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4312026Z 2025-09-07T08:08:33.4312111Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4312212Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4312295Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4312370Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4312443Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4312525Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4312599Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4312674Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4312755Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4312828Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4312908Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4313009Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4313202Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4313273Z return mod(**inputs) 2025-09-07T08:08:33.4313568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4313641Z outputs = self.model( 2025-09-07T08:08:33.4313883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4313963Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4314202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4314274Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4314495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4314574Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4314823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4314920Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4315160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4315262Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4315538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4315673Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4315677Z 2025-09-07T08:08:33.4315779Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4315976Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4316076Z return mod(**inputs) 2025-09-07T08:08:33.4316333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4316409Z outputs = self.model( 2025-09-07T08:08:33.4316652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4316733Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4316986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4317058Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4317278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4317356Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4317602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4317701Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4317938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4318054Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4318329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4318444Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4318447Z 2025-09-07T08:08:33.4318523Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4318608Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4318680Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4318751Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4318832Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4318905Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4318985Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4319148Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4319348Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4319414Z return mod(**inputs) 2025-09-07T08:08:33.4319663Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4319729Z outputs = self.model( 2025-09-07T08:08:33.4319976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4320049Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4320287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4320365Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4320579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4320668Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4320923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4321033Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4321302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4321397Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4321684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4321812Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4321816Z 2025-09-07T08:08:33.4321929Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4322168Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4322232Z return mod(**inputs) 2025-09-07T08:08:33.4322493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4322562Z outputs = self.model( 2025-09-07T08:08:33.4322822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4322915Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4323172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4323253Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4323476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4323569Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4323824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4323935Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4324214Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4324314Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4324609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4324718Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4324721Z 2025-09-07T08:08:33.4324808Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4324887Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4324963Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4325051Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4325129Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4325210Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4325286Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4325363Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4325447Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4325523Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4325599Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4325712Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4325913Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4325989Z return mod(**inputs) 2025-09-07T08:08:33.4326243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4326320Z outputs = self.model( 2025-09-07T08:08:33.4326577Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4326653Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4327041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4327127Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4327360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4327450Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4327718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4327836Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4328105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4328251Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4328535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4328673Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4328677Z 2025-09-07T08:08:33.4328783Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4329002Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4329081Z return mod(**inputs) 2025-09-07T08:08:33.4329327Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4329403Z outputs = self.model( 2025-09-07T08:08:33.4329650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4329727Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4329982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4330053Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4330293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4330375Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4330624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4330729Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4330978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4331081Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4331372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4331489Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4331493Z 2025-09-07T08:08:33.4331575Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4331657Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4331752Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4331830Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4331926Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4332002Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4332078Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4332161Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4332264Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4332467Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4332535Z return mod(**inputs) 2025-09-07T08:08:33.4332786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4332862Z outputs = self.model( 2025-09-07T08:08:33.4333111Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4333193Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4333441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4333512Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4333741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4333822Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4334073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4334210Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4334457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4334563Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4334847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4334998Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4335002Z 2025-09-07T08:08:33.4335106Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4335306Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4335371Z return mod(**inputs) 2025-09-07T08:08:33.4335620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4335700Z outputs = self.model( 2025-09-07T08:08:33.4335945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4336044Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4336290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4336364Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4336587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4336667Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4336919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4337023Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4337277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4337371Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4337652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4337763Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4337767Z 2025-09-07T08:08:33.4337846Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4337929Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4338006Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4338081Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4338164Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4338239Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4338320Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4338395Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4338470Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4338552Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4338656Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4338861Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4338926Z return mod(**inputs) 2025-09-07T08:08:33.4339175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4339253Z outputs = self.model( 2025-09-07T08:08:33.4339501Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4339582Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4339829Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4339931Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4340155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4340234Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4340489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4340588Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4340855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4340951Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4341234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4341371Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4341377Z 2025-09-07T08:08:33.4341481Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4341688Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4341755Z return mod(**inputs) 2025-09-07T08:08:33.4342026Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4342103Z outputs = self.model( 2025-09-07T08:08:33.4342352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4342434Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4342683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4342764Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4342980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4343064Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4343316Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 415, in forward 2025-09-07T08:08:33.4343414Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:08:33.4343677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4343772Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4344053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4344175Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4344179Z 2025-09-07T08:08:33.4344255Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4344337Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4344414Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4344486Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4344569Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4344642Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4344723Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4344795Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4344895Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4345279Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4345350Z return mod(**inputs) 2025-09-07T08:08:33.4345604Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4345672Z outputs = self.model( 2025-09-07T08:08:33.4345914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4346070Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4346315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4346391Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4346607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4346693Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4346962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4347070Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4347318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4347412Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4347694Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:08:33.4347822Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:08:33.4347826Z 2025-09-07T08:08:33.4347952Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:08:33.4348154Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:08:33.4348218Z return mod(**inputs) 2025-09-07T08:08:33.4348470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1438, in forward 2025-09-07T08:08:33.4348536Z outputs = self.model( 2025-09-07T08:08:33.4348782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1266, in forward 2025-09-07T08:08:33.4348854Z decoder_outputs = self.decoder( 2025-09-07T08:08:33.4349092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1109, in forward 2025-09-07T08:08:33.4349174Z layer_outputs = decoder_layer( 2025-09-07T08:08:33.4349391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:08:33.4349476Z return super().__call__(*args, **kwargs) 2025-09-07T08:08:33.4349714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 432, in forward 2025-09-07T08:08:33.4349821Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:08:33.4350069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 263, in forward 2025-09-07T08:08:33.4350161Z attn_output, attn_weights = attention_interface( 2025-09-07T08:08:33.4350442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:08:33.4350548Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:08:33.4350553Z 2025-09-07T08:08:33.4350635Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4350708Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4350783Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4350867Z cudagraph partition due to non gpu ops 2025-09-07T08:08:33.4350968Z cudagraph partition due to non gpu ops. 
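These messages are emitted while Inductor compiles the benchmark's forward pass; the outermost frame in every trace is forward_pass at benchmarks/dynamo/huggingface.py:533, i.e. return mod(**inputs) on the compiled model. As a rough, assumed sketch of that setup (shape only, not the harness's actual code):

import torch

def run_compiled(mod, inputs):
    # Assumed sketch: compile the model with the Inductor backend and call it
    # on the prepared example inputs, roughly what forward_pass does with
    # `return mod(**inputs)`.
    compiled = torch.compile(mod, backend="inductor")
    with torch.no_grad():
        return compiled(**inputs)

The "cudagraph partition due to non gpu ops" lines appear to come from Inductor's cudagraph partitioning deciding it must split the graph around ops it cannot place in a CUDA graph; on this CPU-only (amx) runner that is unsurprising.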
Found from :
2025-09-07T08:08:33.4351168Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:08:33.4351233Z return mod(**inputs)
2025-09-07T08:08:33.4351476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mbart/modeling_mbart.py", line 1461, in forward
2025-09-07T08:08:33.4351645Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-09-07T08:08:33.4351649Z 
2025-09-07T08:08:52.6239858Z Compilation time (from dynamo_timed): 64.685303671
2025-09-07T08:08:52.6381368Z pass
2025-09-07T08:08:52.6387993Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:08:52.6393778Z TIMING: _recursive_pre_grad_passes:0.09234 _recursive_joint_graph_passes:0.86587 _recursive_post_grad_passes:0.16661 linear_unary_template_precompiling:0.03517 async_compile.wait:0.79968 code_gen:18.27896 inductor_compile:44.81482 backend_compile:58.71208 gc:0.00054 entire_frame_compile:64.6853 total_wall_time:64.6853
2025-09-07T08:08:52.6395294Z STATS: call_* op count: 988 | FakeTensorMode.__torch_dispatch__:70584 | FakeTensor.__torch_dispatch__:7670 | ProxyTorchDispatchMode.__torch_dispatch__:19302
2025-09-07T08:08:52.6395847Z Dynamo produced 1 graphs covering 988 ops with 0 graph breaks (0 unique)
2025-09-07T08:08:56.0139501Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:08:56.0140404Z import pynvml # type: ignore[import]
2025-09-07T08:08:58.6334992Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:08:58.6335922Z from pkg_resources import resource_filename
2025-09-07T08:08:59.3430973Z 
2025-09-07T08:09:01.7593322Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:09:01.7598886Z loading model: 0it [00:02, ?it/s]
2025-09-07T08:09:01.7602560Z cpu eval MT5ForConditionalGeneration
2025-09-07T08:09:02.3752089Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:09:02.7037598Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:09:03.0382030Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:09:28.5889330Z Autotune Choices Stats:
2025-09-07T08:09:28.5889816Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.00919700005397317}
2025-09-07T08:09:28.5901839Z AUTOTUNE linear_unary(128x512, 384x512)
2025-09-07T08:09:28.5902121Z strides: [512, 1], [1, 0]
2025-09-07T08:09:28.5902342Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:09:28.5902605Z cpp_CppMicroGemmAMX_0 0.0092 ms 100.0%
2025-09-07T08:09:28.5902847Z _linear_pointwise 0.0597 ms 15.4%
2025-09-07T08:09:28.5903261Z SingleProcess AUTOTUNE benchmarking takes 0.2545 seconds and 1.3631 seconds precompiling for 2 choices
2025-09-07T08:09:30.7148681Z Autotune Choices Stats:
2025-09-07T08:09:30.7149165Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_3", "best_time": 0.033306500199614675}
2025-09-07T08:09:30.7163246Z AUTOTUNE bmm(6x128x128, 6x128x64)
2025-09-07T08:09:30.7163544Z strides: [16384, 128, 1], [64, 384, 1]
2025-09-07T08:09:30.7163802Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:09:30.7164054Z cpp_CppMicroGemmAMX_3 0.0333 ms 100.0%
2025-09-07T08:09:30.7164314Z bmm 0.2687 ms 12.4%
2025-09-07T08:09:30.7164728Z SingleProcess AUTOTUNE benchmarking takes 0.2576 seconds and 1.3939 seconds precompiling for 2 choices
2025-09-07T08:09:32.4605535Z Autotune Choices Stats:
2025-09-07T08:09:32.4606110Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_4", "best_time": 0.008616999821242644}
2025-09-07T08:09:32.4626249Z AUTOTUNE linear_unary(128x384, 512x384)
2025-09-07T08:09:32.4626545Z strides: [384, 1], [1, 0]
2025-09-07T08:09:32.4626783Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:09:32.4627037Z cpp_CppMicroGemmAMX_4 0.0086 ms 100.0%
2025-09-07T08:09:32.4627272Z _linear_pointwise 0.0652 ms 13.2%
2025-09-07T08:09:32.4627984Z SingleProcess AUTOTUNE benchmarking takes 0.2535 seconds and 1.3771 seconds precompiling for 2 choices
2025-09-07T08:09:34.7007187Z Autotune Choices Stats:
2025-09-07T08:09:34.7007743Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_11", "best_time": 0.011760999768739566}
2025-09-07T08:09:34.7020438Z AUTOTUNE linear_unary(128x512, 1024x512)
2025-09-07T08:09:34.7020732Z strides: [512, 1], [1, 0]
2025-09-07T08:09:34.7020969Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:09:34.7021484Z cpp_CppMicroGemmAMX_11 0.0118 ms 100.0%
2025-09-07T08:09:34.7021758Z _linear_pointwise 0.0605 ms 19.5%
2025-09-07T08:09:34.7022143Z SingleProcess AUTOTUNE benchmarking takes 0.2550 seconds and 1.3351 seconds precompiling for 2 choices
2025-09-07T08:09:36.5014824Z Autotune Choices Stats:
2025-09-07T08:09:36.5015465Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_13", "best_time": 0.01119699982154998}
2025-09-07T08:09:36.5027037Z AUTOTUNE linear_unary(128x1024, 512x1024)
2025-09-07T08:09:36.5027366Z strides: [1024, 1], [1, 0]
2025-09-07T08:09:36.5027645Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:09:36.5027959Z cpp_CppMicroGemmAMX_13 0.0112 ms 100.0%
2025-09-07T08:09:36.5028277Z _linear_pointwise 0.0691 ms 16.2%
2025-09-07T08:09:36.5029146Z SingleProcess AUTOTUNE benchmarking takes 0.2597 seconds and 1.3839 seconds precompiling for 2 choices
2025-09-07T08:09:51.4312535Z Autotune Choices Stats:
2025-09-07T08:09:51.4314818Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_168", "best_time": 3.612366499964992}
2025-09-07T08:09:51.4318953Z AUTOTUNE linear_unary(128x512, 250112x512)
2025-09-07T08:09:51.4320028Z strides: [512, 1], [1, 0]
2025-09-07T08:09:51.4320525Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:09:51.4321105Z cpp_CppMicroGemmAMX_168 3.6124 ms 100.0%
2025-09-07T08:09:51.4321351Z _linear_pointwise 8.0945 ms 44.6%
2025-09-07T08:09:51.4321748Z SingleProcess AUTOTUNE benchmarking takes 1.0133 seconds and 1.3292 seconds precompiling for 2 choices
2025-09-07T08:09:52.7225946Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:09:52.7229771Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:09:52.7230239Z return mod(**inputs)
2025-09-07T08:09:52.7230678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward
2025-09-07T08:09:52.7231110Z decoder_outputs = self.decoder(
2025-09-07T08:09:52.7231549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
2025-09-07T08:09:52.7231986Z layer_outputs = layer_module(
2025-09-07T08:09:52.7232336Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:09:52.7232709Z return super().__call__(*args, **kwargs)
2025-09-07T08:09:52.7233103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
2025-09-07T08:09:52.7233510Z self_attention_outputs = self.layer[0](
2025-09-07T08:09:52.7233886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward
2025-09-07T08:09:52.7234276Z attention_output = self.SelfAttention(
2025-09-07T08:09:52.7234669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 421, in forward
2025-09-07T08:09:52.7235061Z position_bias = position_bias + causal_mask
2025-09-07T08:09:52.7235210Z 
2025-09-07T08:09:52.7235304Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7235518Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7235763Z cudagraph partition due to non gpu ops.
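In the AUTOTUNE tables above, the trailing percentage appears to be the best candidate's time relative to each row's time, so the winning kernel always reads 100.0%. A quick arithmetic check against the numbers in the log (values copied from the entries above):

# Relative performance as printed in the AUTOTUNE rows: best_time / this_time.
rows = {
    "linear_unary(128x512, 384x512)":    (0.0092, 0.0597),  # cpp_CppMicroGemmAMX_0 vs _linear_pointwise
    "bmm(6x128x128, 6x128x64)":          (0.0333, 0.2687),  # cpp_CppMicroGemmAMX_3 vs bmm
    "linear_unary(128x512, 250112x512)": (3.6124, 8.0945),  # cpp_CppMicroGemmAMX_168 vs _linear_pointwise
}
for op, (best_ms, other_ms) in rows.items():
    print(f"{op}: fallback at {100.0 * best_ms / other_ms:.1f}% of the best kernel")
# Prints 15.4%, 12.4% and 44.6%, matching the log.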
2025-09-07T08:09:52.7225946Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:09:52.7229771Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:09:52.7230239Z     return mod(**inputs)
2025-09-07T08:09:52.7230678Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward
2025-09-07T08:09:52.7231110Z     decoder_outputs = self.decoder(
2025-09-07T08:09:52.7231549Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
2025-09-07T08:09:52.7231986Z     layer_outputs = layer_module(
2025-09-07T08:09:52.7232336Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:09:52.7232709Z     return super().__call__(*args, **kwargs)
2025-09-07T08:09:52.7233103Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
2025-09-07T08:09:52.7233510Z     self_attention_outputs = self.layer[0](
2025-09-07T08:09:52.7233886Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward
2025-09-07T08:09:52.7234276Z     attention_output = self.SelfAttention(
2025-09-07T08:09:52.7234669Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 421, in forward
2025-09-07T08:09:52.7235061Z     position_bias = position_bias + causal_mask
2025-09-07T08:09:52.7235210Z 
2025-09-07T08:09:52.7235304Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7235518Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7235763Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:09:52.7236195Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:09:52.7236933Z     return mod(**inputs)
2025-09-07T08:09:52.7237289Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward
2025-09-07T08:09:52.7237720Z     decoder_outputs = self.decoder(
2025-09-07T08:09:52.7238102Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
2025-09-07T08:09:52.7238473Z     layer_outputs = layer_module(
2025-09-07T08:09:52.7238868Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:09:52.7239219Z     return super().__call__(*args, **kwargs)
2025-09-07T08:09:52.7239593Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
2025-09-07T08:09:52.7239972Z     self_attention_outputs = self.layer[0](
2025-09-07T08:09:52.7240459Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward
2025-09-07T08:09:52.7240862Z     attention_output = self.SelfAttention(
2025-09-07T08:09:52.7241270Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward
2025-09-07T08:09:52.7241815Z     scores = torch.matmul(query_states, key_states.transpose(3, 2))
2025-09-07T08:09:52.7242043Z 
2025-09-07T08:09:52.7242129Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7242363Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7242586Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7242844Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:09:52.7243243Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:09:52.7243609Z     return mod(**inputs)
2025-09-07T08:09:52.7244002Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward
2025-09-07T08:09:52.7244410Z     decoder_outputs = self.decoder(
2025-09-07T08:09:52.7244820Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
2025-09-07T08:09:52.7245517Z     layer_outputs = layer_module(
2025-09-07T08:09:52.7245908Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:09:52.7246299Z     return super().__call__(*args, **kwargs)
2025-09-07T08:09:52.7246723Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
2025-09-07T08:09:52.7247235Z     self_attention_outputs = self.layer[0](
2025-09-07T08:09:52.7247650Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward
2025-09-07T08:09:52.7248087Z     attention_output = self.SelfAttention(
2025-09-07T08:09:52.7248507Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward
2025-09-07T08:09:52.7248985Z     attn_output = torch.matmul(attn_weights, value_states)
2025-09-07T08:09:52.7249180Z 
2025-09-07T08:09:52.7249288Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:09:52.7249655Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:09:52.7249988Z     return mod(**inputs)
2025-09-07T08:09:52.7250332Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward
2025-09-07T08:09:52.7250717Z     decoder_outputs = self.decoder(
2025-09-07T08:09:52.7251095Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
2025-09-07T08:09:52.7251478Z     layer_outputs = layer_module(
2025-09-07T08:09:52.7251829Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:09:52.7252284Z     return super().__call__(*args, **kwargs)
2025-09-07T08:09:52.7252663Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
2025-09-07T08:09:52.7253042Z     self_attention_outputs = self.layer[0](
2025-09-07T08:09:52.7253417Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward
2025-09-07T08:09:52.7253786Z     attention_output = self.SelfAttention(
2025-09-07T08:09:52.7254200Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward
2025-09-07T08:09:52.7254620Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:09:52.7254783Z 
2025-09-07T08:09:52.7254871Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7255077Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7255290Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7255528Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:09:52.7255889Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:09:52.7256214Z     return mod(**inputs)
2025-09-07T08:09:52.7256584Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward
2025-09-07T08:09:52.7256955Z     encoder_outputs = self.encoder(
2025-09-07T08:09:52.7257331Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
2025-09-07T08:09:52.7257707Z     layer_outputs = layer_module(
2025-09-07T08:09:52.7258052Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:09:52.7258421Z     return super().__call__(*args, **kwargs)
2025-09-07T08:09:52.7258801Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
2025-09-07T08:09:52.7259196Z     self_attention_outputs = self.layer[0](
2025-09-07T08:09:52.7259569Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward
2025-09-07T08:09:52.7259946Z     attention_output = self.SelfAttention(
2025-09-07T08:09:52.7260331Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward
2025-09-07T08:09:52.7260752Z     scores = torch.matmul(query_states, key_states.transpose(3, 2))
2025-09-07T08:09:52.7260937Z 
2025-09-07T08:09:52.7261060Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:09:52.7261447Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:09:52.7261786Z     return mod(**inputs)
2025-09-07T08:09:52.7262145Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward
2025-09-07T08:09:52.7262528Z     encoder_outputs = self.encoder(
2025-09-07T08:09:52.7262916Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
2025-09-07T08:09:52.7263282Z     layer_outputs = layer_module(
2025-09-07T08:09:52.7263630Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:09:52.7263996Z     return super().__call__(*args, **kwargs)
2025-09-07T08:09:52.7264371Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
2025-09-07T08:09:52.7264749Z     self_attention_outputs = self.layer[0](
2025-09-07T08:09:52.7265143Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward
2025-09-07T08:09:52.7265521Z     attention_output = self.SelfAttention(
2025-09-07T08:09:52.7265899Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 433, in forward
2025-09-07T08:09:52.7266399Z     attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores)
2025-09-07T08:09:52.7266610Z 
2025-09-07T08:09:52.7266697Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7266906Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7267122Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7267544Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:09:52.7267894Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:09:52.7268228Z     return mod(**inputs)
2025-09-07T08:09:52.7268595Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward
2025-09-07T08:09:52.7268988Z     encoder_outputs = self.encoder(
2025-09-07T08:09:52.7269375Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
2025-09-07T08:09:52.7269772Z     layer_outputs = layer_module(
2025-09-07T08:09:52.7270124Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:09:52.7270509Z     return super().__call__(*args, **kwargs)
2025-09-07T08:09:52.7270916Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
2025-09-07T08:09:52.7271297Z     self_attention_outputs = self.layer[0](
2025-09-07T08:09:52.7271672Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward
2025-09-07T08:09:52.7272056Z     attention_output = self.SelfAttention(
2025-09-07T08:09:52.7272432Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward
2025-09-07T08:09:52.7272862Z     attn_output = torch.matmul(attn_weights, value_states)
2025-09-07T08:09:52.7273026Z 
2025-09-07T08:09:52.7273142Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:09:52.7273512Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:09:52.7273855Z     return mod(**inputs)
2025-09-07T08:09:52.7274208Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1750, in forward
2025-09-07T08:09:52.7274584Z     encoder_outputs = self.encoder(
2025-09-07T08:09:52.7274963Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward
2025-09-07T08:09:52.7275346Z     layer_outputs = layer_module(
2025-09-07T08:09:52.7275694Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:09:52.7276059Z     return super().__call__(*args, **kwargs)
2025-09-07T08:09:52.7276438Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward
2025-09-07T08:09:52.7276813Z     self_attention_outputs = self.layer[0](
2025-09-07T08:09:52.7277192Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward
2025-09-07T08:09:52.7277573Z     attention_output = self.SelfAttention(
2025-09-07T08:09:52.7277957Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward
2025-09-07T08:09:52.7278366Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:09:52.7278525Z 
2025-09-07T08:09:52.7278606Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7278821Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7279032Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7279244Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7279445Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7279650Z cudagraph partition due to non gpu ops
2025-09-07T08:09:52.7279884Z cudagraph partition due to non gpu ops.
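All of these partition messages follow from the fact that this job runs entirely on CPU: when Inductor has CUDA graphs enabled it has to cut ("partition") the captured graph around every op that is not on a GPU, and it reports the MT5 attention matmuls, softmax and transpose as the offending call sites. A minimal sketch of the same effect on the self-attention pattern from the traces above, assuming mode="reduce-overhead" is what enables cudagraphs (the job's actual cudagraph configuration is not visible in this part of the log):

import torch

# Sketch only: the same attention ops the traces point at, compiled on CPU tensors.
def attn(q, k, v):
    scores = torch.matmul(q, k.transpose(3, 2))                                     # modeling_mt5.py:401
    weights = torch.nn.functional.softmax(scores.float(), dim=-1).type_as(scores)   # modeling_mt5.py:433
    return torch.matmul(weights, v).transpose(1, 2).contiguous()                    # modeling_mt5.py:440/442

compiled = torch.compile(attn, mode="reduce-overhead")  # reduce-overhead turns on CUDA graphs in Inductor
q = torch.randn(1, 6, 128, 64, dtype=torch.bfloat16)    # CPU tensors, as in this job
k = torch.randn(1, 6, 128, 64, dtype=torch.bfloat16)
v = torch.randn(1, 6, 128, 64, dtype=torch.bfloat16)
print(compiled(q, k, v).shape)  # no CUDA graph can be captured, so the graph is partitioned around the CPU ops

The messages are informational rather than failures; the run simply does not get CUDA-graph replay on this hardware.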
[identical "cudagraph partition due to non gpu ops" traces for the encoder self-attention (modeling_mt5.py lines 401, 433, 440 and 442) repeat here verbatim for the remaining encoder layers]
Found from : 2025-09-07T08:09:52.7446142Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7446471Z return mod(**inputs) 2025-09-07T08:09:52.7446847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7447332Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7447722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7448139Z layer_outputs = layer_module( 2025-09-07T08:09:52.7448511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7448870Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7449247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7449629Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7450032Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7450410Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7450783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7451198Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7451374Z 2025-09-07T08:09:52.7451458Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7451658Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7451862Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7452094Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7452445Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7452752Z return mod(**inputs) 2025-09-07T08:09:52.7453101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7453469Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7453834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7454202Z layer_outputs = layer_module( 2025-09-07T08:09:52.7454536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7454893Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7455258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7455630Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7455999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7456364Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7456742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7457138Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7457296Z 2025-09-07T08:09:52.7457404Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7457744Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7458062Z return mod(**inputs) 2025-09-07T08:09:52.7458403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7458770Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7459195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7459607Z layer_outputs = layer_module( 2025-09-07T08:09:52.7459936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7460274Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7460638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7461023Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7461380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7461749Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7462126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7462510Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7462664Z 2025-09-07T08:09:52.7462740Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7462944Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7463159Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7463357Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7463545Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7463766Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7464110Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7464425Z return mod(**inputs) 2025-09-07T08:09:52.7464756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7465105Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7465455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7465807Z layer_outputs = layer_module( 2025-09-07T08:09:52.7466129Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7466468Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7466828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7467198Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7467566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7467926Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7468277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7468679Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7468857Z 2025-09-07T08:09:52.7468933Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7469140Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7469348Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7469572Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7469923Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7470243Z return mod(**inputs) 2025-09-07T08:09:52.7470590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7470947Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7471305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7471667Z layer_outputs = layer_module( 2025-09-07T08:09:52.7472002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7472425Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7472775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7473132Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7473498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7473872Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7474250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7474649Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7474817Z 2025-09-07T08:09:52.7474919Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7475282Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7475597Z return mod(**inputs) 2025-09-07T08:09:52.7475925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7476298Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7476650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7476999Z layer_outputs = layer_module( 2025-09-07T08:09:52.7477321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7477672Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7478033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7478402Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7478768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7479135Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7479507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7479900Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7480057Z 2025-09-07T08:09:52.7480142Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7480350Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7480549Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7480779Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7481129Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7481444Z return mod(**inputs) 2025-09-07T08:09:52.7481777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7482143Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7482503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7482872Z layer_outputs = layer_module( 2025-09-07T08:09:52.7483209Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7483555Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7483927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7484305Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7484680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7485059Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7485456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7485895Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7486072Z 2025-09-07T08:09:52.7486159Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7486374Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7486575Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7486811Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7487282Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7487620Z return mod(**inputs) 2025-09-07T08:09:52.7487980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7488368Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7488726Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7489093Z layer_outputs = layer_module( 2025-09-07T08:09:52.7489426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7489786Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7490152Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7490519Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7490886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7491258Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7491618Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7492010Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7492174Z 2025-09-07T08:09:52.7492275Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7492621Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7492932Z return mod(**inputs) 2025-09-07T08:09:52.7493270Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7493630Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7493983Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7494341Z layer_outputs = layer_module( 2025-09-07T08:09:52.7494667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7495013Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7495381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7495741Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7496093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7496452Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7496805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7497192Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7497346Z 2025-09-07T08:09:52.7497428Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7497625Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7497824Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7498021Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7498214Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7498427Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7498800Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7499114Z return mod(**inputs) 2025-09-07T08:09:52.7499459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7499824Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7500172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7500543Z layer_outputs = layer_module( 2025-09-07T08:09:52.7500881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7501230Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7501587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7501955Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7502317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7502685Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7503065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7503459Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7503639Z 2025-09-07T08:09:52.7503716Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7503920Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7504125Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7504346Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7504704Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7505010Z return mod(**inputs) 2025-09-07T08:09:52.7505347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7505707Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7506050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7506400Z layer_outputs = layer_module( 2025-09-07T08:09:52.7506728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7507071Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7507417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7507777Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7508134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7508496Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7508850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7509226Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7509388Z 2025-09-07T08:09:52.7509487Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7509826Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7510133Z return mod(**inputs) 2025-09-07T08:09:52.7510462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7510809Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7511156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7511510Z layer_outputs = layer_module( 2025-09-07T08:09:52.7511880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7512217Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7512573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7512934Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7513310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7513675Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7514028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7514423Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7514586Z 2025-09-07T08:09:52.7514665Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7514877Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7515109Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7515449Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7515781Z return mod(**inputs) 2025-09-07T08:09:52.7516114Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7516472Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7516823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7517179Z layer_outputs = layer_module( 2025-09-07T08:09:52.7517523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7517871Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7518230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7518591Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7518952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7519319Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7519683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7520081Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7520264Z 2025-09-07T08:09:52.7520342Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7520552Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7520758Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7520988Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7521334Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7521659Z return mod(**inputs) 2025-09-07T08:09:52.7522010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7522386Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7522739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7523105Z layer_outputs = layer_module( 2025-09-07T08:09:52.7523448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7523812Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7524192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7524568Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7524961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7525360Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7525737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7526143Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7526304Z 2025-09-07T08:09:52.7526408Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7526819Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7527264Z return mod(**inputs) 2025-09-07T08:09:52.7527643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7528054Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7528456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7528824Z layer_outputs = layer_module( 2025-09-07T08:09:52.7529162Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7529533Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7529893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7530268Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7530648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7531030Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7531400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7531809Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7531980Z 2025-09-07T08:09:52.7532061Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7532272Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7532492Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7532687Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7532888Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7533115Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7533469Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7533780Z return mod(**inputs) 2025-09-07T08:09:52.7534122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7534488Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7534844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7535208Z layer_outputs = layer_module( 2025-09-07T08:09:52.7535540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7535898Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7536277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7536648Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7537008Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7537380Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7537748Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7538160Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7538333Z 2025-09-07T08:09:52.7538449Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7538675Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7538879Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7539106Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7539452Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7539761Z return mod(**inputs) 2025-09-07T08:09:52.7540102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7540481Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7540838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7541198Z layer_outputs = layer_module( 2025-09-07T08:09:52.7541521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7541873Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7542235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7542603Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7542982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7543353Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7543719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7544112Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7544268Z 2025-09-07T08:09:52.7544376Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7544716Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7545138Z return mod(**inputs) 2025-09-07T08:09:52.7545511Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7545900Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7546266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7546624Z layer_outputs = layer_module( 2025-09-07T08:09:52.7546969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7547326Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7547697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7548067Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7548435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7548812Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7549183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7549579Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7549742Z 2025-09-07T08:09:52.7549818Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7550026Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7550229Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7550461Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7550803Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7551127Z return mod(**inputs) 2025-09-07T08:09:52.7551465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7551863Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7552239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7552584Z layer_outputs = layer_module( 2025-09-07T08:09:52.7552916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7553268Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7553658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7554033Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7554396Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7554772Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7555142Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7555557Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7555730Z 2025-09-07T08:09:52.7555806Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7556010Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7556240Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7556468Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7556821Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7557136Z return mod(**inputs) 2025-09-07T08:09:52.7557468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7557829Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7558182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7558534Z layer_outputs = layer_module( 2025-09-07T08:09:52.7558879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7559214Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7559572Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7559929Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7560278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7560638Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7561036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7561432Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7561588Z 2025-09-07T08:09:52.7561697Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7562041Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7562354Z return mod(**inputs) 2025-09-07T08:09:52.7562693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7563056Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7563406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7563773Z layer_outputs = layer_module( 2025-09-07T08:09:52.7564115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7564472Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7564842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7565249Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7565623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7566013Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7566404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7566815Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7567071Z 2025-09-07T08:09:52.7567187Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7567431Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7567669Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7567901Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7568157Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7568561Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7568900Z return mod(**inputs) 2025-09-07T08:09:52.7569264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7569642Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7570042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7570421Z layer_outputs = layer_module( 2025-09-07T08:09:52.7570776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7571140Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7571513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7571898Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7572278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7572667Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7573042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7573475Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7573664Z 2025-09-07T08:09:52.7573743Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7573964Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7574182Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7574418Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7574782Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7575114Z return mod(**inputs) 2025-09-07T08:09:52.7575473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7575852Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7576229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7576610Z layer_outputs = layer_module( 2025-09-07T08:09:52.7576969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7577335Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7577687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7578050Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7578405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7578766Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7579121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7579540Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7579706Z 2025-09-07T08:09:52.7579806Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7580165Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7580480Z return mod(**inputs) 2025-09-07T08:09:52.7580824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7581189Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7581549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7581899Z layer_outputs = layer_module( 2025-09-07T08:09:52.7582224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7582563Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7582927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7583299Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7583683Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7584048Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7584425Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7584810Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7584960Z 2025-09-07T08:09:52.7585042Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7585242Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7585434Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7585659Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7585999Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7586062Z return mod(**inputs) 2025-09-07T08:09:52.7586298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7586369Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7586601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7586670Z layer_outputs = layer_module( 2025-09-07T08:09:52.7586877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7586960Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7587181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7587269Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7587487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7587570Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7587796Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7587914Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7587918Z 2025-09-07T08:09:52.7587999Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7588072Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7588144Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7588250Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7588435Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7588548Z return mod(**inputs) 2025-09-07T08:09:52.7588774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7588853Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7589082Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7589150Z layer_outputs = layer_module( 2025-09-07T08:09:52.7589379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7589455Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7589691Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7589769Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7589996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7590088Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7590314Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7590445Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7590449Z 2025-09-07T08:09:52.7590549Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7590750Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7590816Z return mod(**inputs) 2025-09-07T08:09:52.7591047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7591128Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7591360Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7591438Z layer_outputs = layer_module( 2025-09-07T08:09:52.7591649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7591726Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7591973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7592050Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7592277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7592357Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7592578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7592689Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7592692Z 2025-09-07T08:09:52.7592769Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7592850Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7592923Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7592995Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7593103Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7593290Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7593360Z return mod(**inputs) 2025-09-07T08:09:52.7593585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7593661Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7593885Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7593953Z layer_outputs = layer_module( 2025-09-07T08:09:52.7594167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7594279Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7594512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7594590Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7594817Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7594906Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7595154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7595286Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7595289Z 2025-09-07T08:09:52.7595366Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7595441Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7595525Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7595625Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7595825Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7595891Z return mod(**inputs) 2025-09-07T08:09:52.7596143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7596226Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7596460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7596537Z layer_outputs = layer_module( 2025-09-07T08:09:52.7596751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7596836Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7597063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7597143Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7597379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7597462Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7597697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7597803Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7597806Z 2025-09-07T08:09:52.7597905Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7598103Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7598167Z return mod(**inputs) 2025-09-07T08:09:52.7598441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7598515Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7598752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7598822Z layer_outputs = layer_module( 2025-09-07T08:09:52.7599039Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7599121Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7599350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7599435Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7599661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7599741Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7599975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7600113Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7600117Z 2025-09-07T08:09:52.7600199Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7600279Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7600358Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7600465Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7600673Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7600744Z return mod(**inputs) 2025-09-07T08:09:52.7600975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7601048Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7601292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7601363Z layer_outputs = layer_module( 2025-09-07T08:09:52.7601585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7601660Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7601943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7602025Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7602263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7602357Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7602592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7602725Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7602731Z 2025-09-07T08:09:52.7602809Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7602886Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7602971Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7603073Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7603280Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7603346Z return mod(**inputs) 2025-09-07T08:09:52.7603588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7603670Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7603908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7603987Z layer_outputs = layer_module( 2025-09-07T08:09:52.7604204Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7604292Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7604524Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7604608Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7604851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7604937Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7605183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7605291Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7605295Z 2025-09-07T08:09:52.7605396Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7605601Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7605701Z return mod(**inputs) 2025-09-07T08:09:52.7605954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7606031Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7606280Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7606360Z layer_outputs = layer_module( 2025-09-07T08:09:52.7606602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7606691Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7607011Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7607112Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7607362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7607461Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7607740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7607891Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7607896Z 2025-09-07T08:09:52.7607995Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7608156Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7608249Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7608324Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7608405Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7608504Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7608704Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7608768Z return mod(**inputs) 2025-09-07T08:09:52.7609006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7609086Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7609319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7609397Z layer_outputs = layer_module( 2025-09-07T08:09:52.7609608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7609686Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7609922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7610001Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7610241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7610325Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7610559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7610692Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7610697Z 2025-09-07T08:09:52.7610774Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7610860Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7610933Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7611042Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7611232Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7611297Z return mod(**inputs) 2025-09-07T08:09:52.7611538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7611610Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7611889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7611957Z layer_outputs = layer_module( 2025-09-07T08:09:52.7612170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7612252Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7612481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7612581Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7612862Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7612941Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7613175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7613282Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7613285Z 2025-09-07T08:09:52.7613390Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7613598Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7613668Z return mod(**inputs) 2025-09-07T08:09:52.7613899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7613971Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7614210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7614279Z layer_outputs = layer_module( 2025-09-07T08:09:52.7614495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7614570Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7614802Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7614888Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7615116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7615202Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7615429Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7615541Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7615545Z 2025-09-07T08:09:52.7615620Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7615694Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7615802Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7615993Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7616066Z return mod(**inputs) 2025-09-07T08:09:52.7616297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7616370Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7616605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7616674Z layer_outputs = layer_module( 2025-09-07T08:09:52.7616894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7616970Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7617198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7617286Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7617516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7617638Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7617868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7617990Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7618002Z 2025-09-07T08:09:52.7618080Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7618156Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7618265Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7618364Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7618557Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7618620Z return mod(**inputs) 2025-09-07T08:09:52.7618845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7618926Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7619151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7619240Z layer_outputs = layer_module( 2025-09-07T08:09:52.7619447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7619522Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7619750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7619825Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7620051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7620130Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7620351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7620460Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7620463Z 2025-09-07T08:09:52.7620560Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7620753Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7620815Z return mod(**inputs) 2025-09-07T08:09:52.7621047Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7621118Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7621340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7621415Z layer_outputs = layer_module( 2025-09-07T08:09:52.7621623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7621706Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7621926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7622004Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7622230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7622309Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7622537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7622638Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7622641Z 2025-09-07T08:09:52.7622722Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7622799Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7622898Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7623003Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7623076Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7623171Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7623365Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7623429Z return mod(**inputs) 2025-09-07T08:09:52.7623661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7623751Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7623985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7624052Z layer_outputs = layer_module( 2025-09-07T08:09:52.7624257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7624340Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7624562Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7624644Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7624880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7624959Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7625196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7625320Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7625323Z 2025-09-07T08:09:52.7625403Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7625478Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7625552Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7625657Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7625849Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7625920Z return mod(**inputs) 2025-09-07T08:09:52.7626149Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7626227Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7626455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7626524Z layer_outputs = layer_module( 2025-09-07T08:09:52.7626742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7626828Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7627056Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7627134Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7627356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7627441Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7627665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7627771Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7627775Z 2025-09-07T08:09:52.7627872Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7628059Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7628129Z return mod(**inputs) 2025-09-07T08:09:52.7628357Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7628435Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7628741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7628813Z layer_outputs = layer_module( 2025-09-07T08:09:52.7629018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7629092Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7629344Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 559, in forward 2025-09-07T08:09:52.7629421Z self_attention_outputs = self.layer[0]( 2025-09-07T08:09:52.7629646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 475, in forward 2025-09-07T08:09:52.7629727Z attention_output = self.SelfAttention( 2025-09-07T08:09:52.7629952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7630067Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7630070Z 2025-09-07T08:09:52.7630145Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7630229Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7630324Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7630426Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7630623Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7630688Z return mod(**inputs) 2025-09-07T08:09:52.7630926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7630999Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7631236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7631306Z layer_outputs = layer_module( 2025-09-07T08:09:52.7631520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7631604Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7631831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7631919Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7632146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7632228Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7632461Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 401, in forward 2025-09-07T08:09:52.7632582Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:09:52.7632585Z 2025-09-07T08:09:52.7632668Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7632747Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7632828Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7632941Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7633126Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7633196Z return mod(**inputs) 2025-09-07T08:09:52.7633417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7633488Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7633717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7633785Z layer_outputs = layer_module( 2025-09-07T08:09:52.7633997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7634074Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7634338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7634415Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7634654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7634741Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7634975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 440, in forward 2025-09-07T08:09:52.7635083Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:09:52.7635087Z 2025-09-07T08:09:52.7635183Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:09:52.7635368Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7635437Z return mod(**inputs) 2025-09-07T08:09:52.7635666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1787, in forward 2025-09-07T08:09:52.7635742Z decoder_outputs = self.decoder( 2025-09-07T08:09:52.7635982Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1079, in forward 2025-09-07T08:09:52.7636059Z layer_outputs = layer_module( 2025-09-07T08:09:52.7636268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:09:52.7636344Z return super().__call__(*args, **kwargs) 2025-09-07T08:09:52.7636574Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 583, in forward 2025-09-07T08:09:52.7636650Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:09:52.7636877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 512, in forward 2025-09-07T08:09:52.7636958Z attention_output = self.EncDecAttention( 2025-09-07T08:09:52.7637181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 442, in forward 2025-09-07T08:09:52.7637292Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:09:52.7637296Z 2025-09-07T08:09:52.7637371Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7637455Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7637529Z cudagraph partition due to non gpu ops 2025-09-07T08:09:52.7637629Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:09:52.7637822Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:09:52.7637885Z return mod(**inputs) 2025-09-07T08:09:52.7638119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mt5/modeling_mt5.py", line 1823, in forward 2025-09-07T08:09:52.7638254Z loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1)) 2025-09-07T08:09:52.7638261Z 2025-09-07T08:10:12.1024917Z Compilation time (from dynamo_timed): 67.58887872 2025-09-07T08:10:12.1163893Z pass 2025-09-07T08:10:12.1164340Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:10:12.1165606Z TIMING: _recursive_pre_grad_passes:0.08372 _recursive_joint_graph_passes:1.14744 _recursive_post_grad_passes:0.28716 linear_unary_template_precompiling:6.81345 linear_unary_template_autotuning:2.02587 bmm_template_precompiling:1.39805 bmm_template_autotuning:0.2557 async_compile.wait:0.79373 code_gen:19.22965 inductor_compile:53.72987 backend_compile:63.33189 gc:0.00108 entire_frame_compile:67.58888 total_wall_time:67.58888 2025-09-07T08:10:12.1167092Z STATS: call_* op count: 1207 | FakeTensorMode.__torch_dispatch__:56575 | FakeTensor.__torch_dispatch__:7112 | ProxyTorchDispatchMode.__torch_dispatch__:15790 2025-09-07T08:10:12.1167676Z Dynamo produced 1 graphs covering 1207 ops with 0 graph breaks (0 unique) 2025-09-07T08:10:15.3868991Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. 
If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T08:10:15.3870606Z import pynvml # type: ignore[import] 2025-09-07T08:10:18.0472949Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-09-07T08:10:18.0473928Z from pkg_resources import resource_filename 2025-09-07T08:10:18.6860329Z 2025-09-07T08:10:18.6982793Z loading model: 0it [00:00, ?it/s]If you want to use `MegatronBertForCausalLM` as a standalone, add `is_decoder=True.` 2025-09-07T08:10:18.6983688Z WARNING:transformers.models.megatron_bert.modeling_megatron_bert:If you want to use `MegatronBertForCausalLM` as a standalone, add `is_decoder=True.` 2025-09-07T08:10:21.8913485Z 2025-09-07T08:10:21.8914903Z loading model: 0it [00:03, ?it/s] 2025-09-07T08:10:21.8917088Z cpu eval MegatronBertForCausalLM 2025-09-07T08:10:23.4794263Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:10:23.9351788Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:10:24.3997008Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:10:51.6893773Z Autotune Choices Stats: 2025-09-07T08:10:51.6898395Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.0897989998520643} 2025-09-07T08:10:51.6928202Z AUTOTUNE linear_unary(512x1024, 1024x1024, 1024) 2025-09-07T08:10:51.6928661Z strides: [1024, 1], [1, 0], [1] 2025-09-07T08:10:51.6929547Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:10:51.6929897Z cpp_CppMicroGemmAMX_0 0.0898 ms 100.0% 2025-09-07T08:10:51.6930157Z _linear_pointwise 0.1137 ms 79.0% 2025-09-07T08:10:51.6930589Z SingleProcess AUTOTUNE benchmarking takes 0.2770 seconds and 1.4039 seconds precompiling for 2 choices 2025-09-07T08:10:53.8986594Z Autotune Choices Stats: 2025-09-07T08:10:53.8987213Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_linear_pointwise", "best_time": 0.3536790002272028} 2025-09-07T08:10:53.8999834Z AUTOTUNE linear_unary(512x1024, 4096x1024, 4096) 2025-09-07T08:10:53.9000182Z strides: [1024, 1], [1, 0], [1] 2025-09-07T08:10:53.9000533Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:10:53.9000890Z _linear_pointwise 0.3537 ms 100.0% 2025-09-07T08:10:53.9001218Z cpp_CppMicroGemmAMX_4 0.3928 ms 90.0% 2025-09-07T08:10:53.9001787Z SingleProcess AUTOTUNE benchmarking takes 0.3127 seconds and 1.5267 seconds precompiling for 2 choices 2025-09-07T08:10:55.6599986Z Autotune Choices Stats: 2025-09-07T08:10:55.6600705Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_5", "best_time": 0.2261679999264743} 2025-09-07T08:10:55.6614599Z AUTOTUNE linear_unary(512x4096, 1024x4096, 1024) 2025-09-07T08:10:55.6615034Z strides: [4096, 1], [1, 0], [1] 2025-09-07T08:10:55.6615428Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:10:55.6615887Z cpp_CppMicroGemmAMX_5 0.2262 ms 100.0% 2025-09-07T08:10:55.6616268Z _linear_pointwise 0.3401 ms 66.5% 2025-09-07T08:10:55.6616897Z SingleProcess AUTOTUNE benchmarking takes 0.3095 seconds and 1.3297 seconds precompiling for 2 
choices 2025-09-07T08:11:08.7419490Z Autotune Choices Stats: 2025-09-07T08:11:08.7420112Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_linear_pointwise", "best_time": 1.9576859999688168} 2025-09-07T08:11:08.7431832Z AUTOTUNE linear_unary(512x1024, 29056x1024, 29056) 2025-09-07T08:11:08.7432599Z strides: [1024, 1], [1, 0], [1] 2025-09-07T08:11:08.7435191Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:11:08.7435508Z _linear_pointwise 1.9577 ms 100.0% 2025-09-07T08:11:08.7435761Z cpp_CppMicroGemmAMX_145 2.5976 ms 75.4% 2025-09-07T08:11:08.7436167Z SingleProcess AUTOTUNE benchmarking takes 0.5903 seconds and 1.4035 seconds precompiling for 2 choices 2025-09-07T08:11:10.2078432Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2078794Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2079368Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2079598Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2079807Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2080053Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2080292Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2080525Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2080750Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2080999Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2081223Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2081453Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2081681Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2081998Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2082230Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2082467Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2082701Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2082929Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2083162Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2083398Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2083631Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2083874Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2084113Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2084340Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2084587Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2084818Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2085052Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2085279Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2085515Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2085747Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2086000Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2086233Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2086478Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2086714Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2087182Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2087433Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2087666Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2087896Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2088135Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2088367Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2088608Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2088832Z cudagraph partition due to non gpu ops 
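The AUTOTUNE blocks above come from Inductor's max-autotune path benchmarking a C++ AMX micro-GEMM template (cpp_CppMicroGemmAMX_*) against the ATen _linear_pointwise fallback for each linear_unary shape, then reporting the per-choice benchmarking and precompiling time. A minimal sketch, assuming a placeholder model and shapes rather than the benchmarked HuggingFace models, of the kind of CPU compile settings that trigger this autotuning:

import torch
import torch._inductor.config as inductor_config

# Assumption: freezing is enabled, mirroring the *_freezing config names in this job.
inductor_config.freezing = True

# Placeholder model; linear followed by a unary op is what shows up as linear_unary above.
model = torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.ReLU()).eval()
compiled = torch.compile(model, mode="max-autotune")

x = torch.randn(512, 1024)
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    compiled(x)  # first call compiles and benchmarks the available GEMM choices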
2025-09-07T08:11:10.2131144Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2131361Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2131585Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2131814Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2132040Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2132262Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2132489Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2132735Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2132965Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2133186Z cudagraph partition due to non gpu ops 2025-09-07T08:11:10.2133460Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:11:10.2133896Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:11:10.2134432Z return mod(**inputs) 2025-09-07T08:11:10.2134949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1086, in forward 2025-09-07T08:11:10.2135445Z lm_loss = self.loss_function( 2025-09-07T08:11:10.2135875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss 2025-09-07T08:11:10.2136420Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs) 2025-09-07T08:11:10.2136973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy 2025-09-07T08:11:10.2137547Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction) 2025-09-07T08:11:10.2137834Z 2025-09-07T08:11:23.6189098Z Compilation time (from dynamo_timed): 57.75938517 2025-09-07T08:11:23.6219852Z pass 2025-09-07T08:11:23.6220250Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:11:23.6226726Z TIMING: _recursive_pre_grad_passes:0.07153 _recursive_joint_graph_passes:0.83705 _recursive_post_grad_passes:0.12042 linear_unary_template_precompiling:5.68839 linear_unary_template_autotuning:1.48133 async_compile.wait:0.83572 code_gen:13.0672 inductor_compile:41.19767 backend_compile:52.18295 gc:0.00114 entire_frame_compile:57.75939 total_wall_time:57.75939 2025-09-07T08:11:23.6228147Z STATS: call_* op count: 725 | FakeTensorMode.__torch_dispatch__:56990 | FakeTensor.__torch_dispatch__:5506 | ProxyTorchDispatchMode.__torch_dispatch__:16350 2025-09-07T08:11:23.6229024Z Dynamo produced 1 graphs covering 725 ops with 0 graph breaks (0 unique) 2025-09-07T08:11:26.9901133Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T08:11:26.9902243Z import pynvml # type: ignore[import] 2025-09-07T08:11:29.6482338Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-09-07T08:11:29.6483324Z from pkg_resources import resource_filename 2025-09-07T08:11:30.4084533Z 2025-09-07T08:11:33.2609896Z loading model: 0it [00:00, ?it/s] 2025-09-07T08:11:33.2610529Z loading model: 0it [00:02, ?it/s] 2025-09-07T08:11:33.2610846Z cpu eval MegatronBertForQuestionAnswering 2025-09-07T08:11:34.8521262Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:11:35.2368518Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:11:35.6170985Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:12:14.2442559Z Autotune Choices Stats: 2025-09-07T08:12:14.2443075Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_144", "best_time": 0.006486000074801268} 2025-09-07T08:12:14.2444960Z AUTOTUNE linear_unary(512x1024, 2x1024, 2) 2025-09-07T08:12:14.2445756Z strides: [1024, 1], [1, 0], [1] 2025-09-07T08:12:14.2446140Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:12:14.2446592Z cpp_CppMicroGemmAMX_144 0.0065 ms 100.0% 2025-09-07T08:12:14.2447614Z _linear_pointwise 0.0390 ms 16.6% 2025-09-07T08:12:14.2448094Z SingleProcess AUTOTUNE benchmarking takes 0.2629 seconds and 1.3931 seconds precompiling for 2 choices 2025-09-07T08:12:15.7307809Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:12:15.7308399Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:12:15.7308796Z return mod(**inputs) 2025-09-07T08:12:15.7309348Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1611, in forward 2025-09-07T08:12:15.7309886Z logits = self.qa_outputs(sequence_output) 2025-09-07T08:12:15.7310042Z 2025-09-07T08:12:15.7310155Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7310402Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7310670Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7310923Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7311158Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7311375Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7311597Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7311831Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7312058Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7312277Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7312502Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7312738Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7312966Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7313174Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7313394Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7313618Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7313857Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7314075Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7314673Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7314983Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7315215Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7315435Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7315650Z cudagraph partition due to non gpu ops 2025-09-07T08:12:15.7315888Z cudagraph partition due to non gpu ops 
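The repeated "Trying to call the empty_gpu_cache for device: cpu" warnings are emitted because the harness asks to release an accelerator cache even on CPU-only runs, where there is nothing to release. A hedged sketch of a device-aware guard (the helper name below is hypothetical, not taken from the benchmark harness):

import torch

def release_accelerator_cache(device: str) -> None:
    # Only CUDA and XPU expose a cached allocator to flush; on CPU this is a no-op.
    if device == "cuda" and torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()

release_accelerator_cache("cpu")  # silently does nothing instead of warning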
2025-09-07T08:12:15.7355424Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:12:15.7355856Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:12:15.7356205Z return mod(**inputs)
2025-09-07T08:12:15.7356632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1629, in forward
2025-09-07T08:12:15.7357090Z start_loss = loss_fct(start_logits, start_positions)
2025-09-07T08:12:15.7357262Z 
2025-09-07T08:12:15.7357388Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:12:15.7357749Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:12:15.7358074Z return mod(**inputs)
2025-09-07T08:12:15.7358491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/megatron_bert/modeling_megatron_bert.py", line 1630, in forward
2025-09-07T08:12:15.7358928Z end_loss = loss_fct(end_logits, end_positions)
2025-09-07T08:12:15.7359088Z 
2025-09-07T08:12:28.3586481Z Compilation time (from dynamo_timed): 51.401993616
2025-09-07T08:12:28.3591141Z pass
2025-09-07T08:12:28.3594008Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:12:28.3595016Z TIMING: _recursive_pre_grad_passes:0.06791 _recursive_joint_graph_passes:0.81921 _recursive_post_grad_passes:0.1225 linear_unary_template_precompiling:1.41847 linear_unary_template_autotuning:0.26089 async_compile.wait:0.74545 code_gen:12.42267 inductor_compile:34.90193 backend_compile:45.97575 gc:0.00068 entire_frame_compile:51.40199 total_wall_time:51.40199
2025-09-07T08:12:28.3596125Z STATS: call_* op count: 726 | FakeTensorMode.__torch_dispatch__:56810 | FakeTensor.__torch_dispatch__:5530 | ProxyTorchDispatchMode.__torch_dispatch__:16350
2025-09-07T08:12:28.3596656Z Dynamo produced 1 graphs covering 726 ops with 0 graph breaks (0 unique)
2025-09-07T08:12:31.6808520Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:12:31.6810469Z import pynvml # type: ignore[import]
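The "Trying to call the empty_gpu_cache for device: cpu" warning above comes from the benchmark harness asking to flush an accelerator allocator cache on a device that has none. A hedged sketch of the guard one could apply; empty_device_cache is a hypothetical helper for illustration, not the harness function:

    import torch

    def empty_device_cache(device: str) -> None:
        # Only CUDA and XPU expose a cached allocator to flush; on CPU there
        # is nothing to release, which is what the warning above points out.
        if device == "cuda" and torch.cuda.is_available():
            torch.cuda.empty_cache()
        elif device == "xpu" and hasattr(torch, "xpu") and torch.xpu.is_available():
            torch.xpu.empty_cache()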
2025-09-07T08:12:34.3600892Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:12:34.3602081Z from pkg_resources import resource_filename
2025-09-07T08:12:35.0878610Z 
2025-09-07T08:12:35.6318322Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:12:35.6319004Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:12:35.6319488Z cpu eval MobileBertForMaskedLM
2025-09-07T08:12:35.9106088Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:12:36.1336045Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:12:36.3507883Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:13:19.0493706Z Autotune Choices Stats:
2025-09-07T08:13:19.0499274Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.012140000308136223}
2025-09-07T08:13:19.0506546Z AUTOTUNE linear_unary(128x384, 512x384, 512)
2025-09-07T08:13:19.0507015Z strides: [384, 1], [1, 0], [1]
2025-09-07T08:13:19.0507691Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:13:19.0508033Z cpp_CppMicroGemmAMX_0 0.0121 ms 100.0%
2025-09-07T08:13:19.0508312Z _linear_pointwise 0.0690 ms 17.6%
2025-09-07T08:13:19.0508697Z SingleProcess AUTOTUNE benchmarking takes 0.2544 seconds and 1.3090 seconds precompiling for 2 choices
2025-09-07T08:13:20.7243641Z Autotune Choices Stats:
2025-09-07T08:13:20.7244508Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_1", "best_time": 0.007240000286401482}
2025-09-07T08:13:20.7256156Z AUTOTUNE linear_unary(128x512, 128x512, 128)
2025-09-07T08:13:20.7256443Z strides: [512, 1], [1, 0], [1]
2025-09-07T08:13:20.7256771Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:13:20.7257051Z cpp_CppMicroGemmAMX_1 0.0072 ms 100.0%
2025-09-07T08:13:20.7257304Z _linear_pointwise 0.0486 ms 14.9%
2025-09-07T08:13:20.7257699Z SingleProcess AUTOTUNE benchmarking takes 0.2516 seconds and 1.2967 seconds precompiling for 2 choices
2025-09-07T08:13:22.3737149Z Autotune Choices Stats:
2025-09-07T08:13:22.3738150Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_2", "best_time": 0.007665999874006957}
2025-09-07T08:13:22.3749418Z AUTOTUNE linear_unary(128x128, 128x128, 128)
2025-09-07T08:13:22.3752156Z strides: [128, 1], [1, 0], [1]
2025-09-07T08:13:22.3752641Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:13:22.3753087Z cpp_CppMicroGemmAMX_2 0.0077 ms 100.0%
2025-09-07T08:13:22.3753856Z _linear_pointwise 0.0382 ms 20.1%
2025-09-07T08:13:22.3754312Z SingleProcess AUTOTUNE benchmarking takes 0.2510 seconds and 1.3119 seconds precompiling for 2 choices
2025-09-07T08:13:24.3505092Z Autotune Choices Stats:
2025-09-07T08:13:24.3505565Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_7", "best_time": 0.011017999895557296}
2025-09-07T08:13:24.3510944Z AUTOTUNE linear_unary(128x128, 512x128, 512)
2025-09-07T08:13:24.3511357Z strides: [128, 1], [1, 0], [1]
2025-09-07T08:13:24.3511698Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:13:24.3512108Z cpp_CppMicroGemmAMX_7 0.0110 ms 100.0%
2025-09-07T08:13:24.3512424Z _linear_pointwise 0.0576 ms 19.1%
2025-09-07T08:13:24.3512958Z SingleProcess AUTOTUNE benchmarking takes 0.2535 seconds and 1.3104 seconds precompiling for 2 choices
2025-09-07T08:13:26.5576780Z Autotune Choices Stats:
2025-09-07T08:13:26.5577412Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_15", "best_time": 0.010115999884874327}
2025-09-07T08:13:26.5589852Z AUTOTUNE linear_unary(128x128, 512x128, 512)
2025-09-07T08:13:26.5591606Z strides: [128, 1], [1, 0], [1]
2025-09-07T08:13:26.5591981Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:13:26.5592387Z cpp_CppMicroGemmAMX_15 0.0101 ms 100.0%
2025-09-07T08:13:26.5592740Z _linear_pointwise 0.0569 ms 17.8%
2025-09-07T08:13:26.5593327Z SingleProcess AUTOTUNE benchmarking takes 0.2519 seconds and 1.3187 seconds precompiling for 2 choices
2025-09-07T08:13:56.5404146Z Autotune Choices Stats:
2025-09-07T08:13:56.5404794Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_361", "best_time": 0.0185349999810569}
2025-09-07T08:13:56.5416650Z AUTOTUNE linear_unary(128x512, 512x512, 512)
2025-09-07T08:13:56.5417116Z strides: [512, 1], [1, 0], [1]
2025-09-07T08:13:56.5417458Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:13:56.5417833Z cpp_CppMicroGemmAMX_361 0.0185 ms 100.0%
2025-09-07T08:13:56.5418142Z _linear_pointwise 0.0747 ms 24.8%
2025-09-07T08:13:56.5418708Z SingleProcess AUTOTUNE benchmarking takes 0.2605 seconds and 1.3319 seconds precompiling for 2 choices
2025-09-07T08:13:58.3220566Z Autotune Choices Stats:
2025-09-07T08:13:58.3221037Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_362", "best_time": 0.26724799977273506}
2025-09-07T08:13:58.3230598Z AUTOTUNE linear_unary(128x512, 30522x512)
2025-09-07T08:13:58.3230896Z strides: [512, 1], [1, 0]
2025-09-07T08:13:58.3231129Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:13:58.3231400Z cpp_CppMicroGemmAMX_362 0.2672 ms 100.0%
2025-09-07T08:13:58.3231637Z _linear_pointwise 0.4445 ms 60.1%
2025-09-07T08:13:58.3232021Z SingleProcess AUTOTUNE benchmarking takes 0.3509 seconds and 1.3028 seconds precompiling for 2 choices
2025-09-07T08:13:59.8135349Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:13:59.8135901Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:13:59.8136306Z return mod(**inputs)
2025-09-07T08:13:59.8136823Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-09-07T08:13:59.8137290Z outputs = self.mobilebert(
2025-09-07T08:13:59.8137743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 791, in forward
2025-09-07T08:13:59.8138248Z embedding_output = self.embeddings(
2025-09-07T08:13:59.8138812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 199, in forward
2025-09-07T08:13:59.8139235Z inputs_embeds = torch.cat(
2025-09-07T08:13:59.8139368Z 
2025-09-07T08:13:59.8139487Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:13:59.8139895Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:13:59.8140286Z return mod(**inputs)
2025-09-07T08:13:59.8140697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 976, in forward
2025-09-07T08:13:59.8141129Z outputs = self.mobilebert(
2025-09-07T08:13:59.8141583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 791, in forward
2025-09-07T08:13:59.8142025Z embedding_output = self.embeddings(
2025-09-07T08:13:59.8142471Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 208, in forward
2025-09-07T08:13:59.8142952Z inputs_embeds = self.embedding_transformation(inputs_embeds)
2025-09-07T08:13:59.8143139Z 
2025-09-07T08:13:59.8143225Z cudagraph partition due to non gpu ops
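Each AUTOTUNE block above records Inductor benchmarking its candidate kernels for one linear_unary call, here the C++ AMX micro-GEMM template against the _linear_pointwise fallback op. A minimal sketch of compiling a small model under settings comparable to this job's max_autotune/amp configuration (the real benchmark flags are set by the harness and are not reproduced here):

    import torch

    model = torch.nn.Sequential(
        torch.nn.Linear(384, 512),
        torch.nn.ReLU(),
        torch.nn.Linear(512, 128),
    ).eval()

    x = torch.randn(128, 384)

    # "max-autotune" makes Inductor benchmark kernel choices (the AUTOTUNE lines);
    # bfloat16 autocast mirrors the amp part of the benchmark configuration.
    with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
        compiled = torch.compile(model, mode="max-autotune")
        out = compiled(x)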
2025-09-07T08:13:59.8209311Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:13:59.8209728Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:13:59.8210102Z return mod(**inputs)
2025-09-07T08:13:59.8210582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 994, in forward
2025-09-07T08:13:59.8211166Z masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1))
2025-09-07T08:13:59.8211442Z 
2025-09-07T08:14:32.3104654Z Compilation time (from dynamo_timed): 114.807020882
2025-09-07T08:14:32.3109500Z pass
2025-09-07T08:14:32.3109887Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:14:32.3110891Z TIMING: _recursive_pre_grad_passes:0.15634 _recursive_joint_graph_passes:1.43658 _recursive_post_grad_passes:0.21737 linear_unary_template_precompiling:9.24351 linear_unary_template_autotuning:1.85923 async_compile.wait:0.80016 code_gen:30.45025 inductor_compile:81.16693 backend_compile:103.06893 gc:0.00051 entire_frame_compile:114.80702 total_wall_time:114.80702
2025-09-07T08:14:32.3111981Z STATS: call_* op count: 1451 | FakeTensorMode.__torch_dispatch__:114780 | FakeTensor.__torch_dispatch__:9495 | ProxyTorchDispatchMode.__torch_dispatch__:31006
2025-09-07T08:14:32.3112507Z Dynamo produced 1 graphs covering 1451 ops with 0 graph breaks (0 unique)
2025-09-07T08:14:36.3792245Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:14:36.3793724Z import pynvml # type: ignore[import]
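The "Compilation time (from dynamo_timed)" and TIMING lines above break the 114.8 s frame compile into its Dynamo, AOT and Inductor phases (template precompiling and autotuning, codegen, and so on). A hedged sketch of pulling a similar per-phase summary from a local run; compile_times lives in torch._dynamo.utils in current releases, but treat the exact API surface as an assumption:

    import torch
    import torch._dynamo.utils as dynamo_utils

    fn = torch.compile(lambda x: torch.nn.functional.gelu(x @ x))
    fn(torch.randn(256, 256))  # first call triggers compilation

    # Summarize where Dynamo spent compile time (analogous to the TIMING line).
    print(dynamo_utils.compile_times())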
2025-09-07T08:14:38.9964402Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:14:38.9965492Z from pkg_resources import resource_filename
2025-09-07T08:14:39.8136395Z 
2025-09-07T08:14:40.2271355Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:14:40.2275949Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:14:40.2280914Z cpu eval MobileBertForQuestionAnswering
2025-09-07T08:14:40.4396898Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:14:40.6356176Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:14:40.8320014Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:15:52.6095526Z Autotune Choices Stats:
2025-09-07T08:15:52.6096032Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_361", "best_time": 0.004387999979371671}
2025-09-07T08:15:52.6108538Z AUTOTUNE linear_unary(128x512, 2x512, 2)
2025-09-07T08:15:52.6108822Z strides: [512, 1], [1, 0], [1]
2025-09-07T08:15:52.6109084Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:15:52.6109369Z cpp_CppMicroGemmAMX_361 0.0044 ms 100.0%
2025-09-07T08:15:52.6112263Z _linear_pointwise 0.0378 ms 11.6%
2025-09-07T08:15:52.6113091Z SingleProcess AUTOTUNE benchmarking takes 0.2536 seconds and 1.2927 seconds precompiling for 2 choices
2025-09-07T08:15:54.1185232Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:15:54.1185726Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:15:54.1186093Z return mod(**inputs)
2025-09-07T08:15:54.1186579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1256, in forward
2025-09-07T08:15:54.1187099Z logits = self.qa_outputs(sequence_output)
2025-09-07T08:15:54.1187251Z 
2025-09-07T08:15:54.1187375Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:15:54.1187738Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:15:54.1188096Z return mod(**inputs)
2025-09-07T08:15:54.1188507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-09-07T08:15:54.1188936Z outputs = self.mobilebert(
2025-09-07T08:15:54.1189349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 791, in forward
2025-09-07T08:15:54.1189765Z embedding_output = self.embeddings(
2025-09-07T08:15:54.1190189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 199, in forward
2025-09-07T08:15:54.1190599Z inputs_embeds = torch.cat(
2025-09-07T08:15:54.1190717Z 
2025-09-07T08:15:54.1190832Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:15:54.1191188Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:15:54.1191517Z return mod(**inputs)
2025-09-07T08:15:54.1191916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1242, in forward
2025-09-07T08:15:54.1192319Z outputs = self.mobilebert(
2025-09-07T08:15:54.1192712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 791, in forward
2025-09-07T08:15:54.1193570Z embedding_output = self.embeddings(
2025-09-07T08:15:54.1193986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 208, in forward
2025-09-07T08:15:54.1194446Z inputs_embeds = self.embedding_transformation(inputs_embeds)
2025-09-07T08:15:54.1194623Z 
2025-09-07T08:15:54.1194718Z cudagraph partition due to non gpu ops
2025-09-07T08:15:54.1254549Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:15:54.1254973Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:15:54.1255320Z return mod(**inputs)
2025-09-07T08:15:54.1255747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1274, in forward
2025-09-07T08:15:54.1256201Z start_loss = loss_fct(start_logits, start_positions)
2025-09-07T08:15:54.1256364Z 
2025-09-07T08:15:54.1256475Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:15:54.1256841Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:15:54.1257169Z return mod(**inputs)
2025-09-07T08:15:54.1257605Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/mobilebert/modeling_mobilebert.py", line 1275, in forward
2025-09-07T08:15:54.1258042Z end_loss = loss_fct(end_logits, end_positions)
2025-09-07T08:15:54.1258193Z 
2025-09-07T08:16:26.5202568Z Compilation time (from dynamo_timed): 104.524318119
2025-09-07T08:16:26.5202894Z pass
2025-09-07T08:16:26.5203530Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:16:26.5204596Z TIMING: _recursive_pre_grad_passes:0.15407 _recursive_joint_graph_passes:1.45132 _recursive_post_grad_passes:0.2269 linear_unary_template_precompiling:1.35584 linear_unary_template_autotuning:0.25135 async_compile.wait:0.66548 code_gen:30.42786 inductor_compile:71.03085 backend_compile:92.81825 gc:0.0012 entire_frame_compile:104.52432 total_wall_time:104.52432
2025-09-07T08:16:26.5205803Z STATS: call_* op count: 1455 | FakeTensorMode.__torch_dispatch__:114664 | FakeTensor.__torch_dispatch__:9536 | ProxyTorchDispatchMode.__torch_dispatch__:31013
2025-09-07T08:16:26.5206362Z Dynamo produced 1 graphs covering 1455 ops with 0 graph breaks (0 unique)
2025-09-07T08:16:30.6775500Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:16:30.6776540Z import pynvml # type: ignore[import]
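"Dynamo produced 1 graphs covering 1455 ops with 0 graph breaks" above is the summary the benchmark prints after tracing; zero breaks means the whole forward pass compiled as a single FX graph. A hedged sketch of checking the same thing on an arbitrary function with torch._dynamo.explain (the output attribute names are assumed from current PyTorch):

    import torch

    def f(x):
        return torch.nn.functional.softmax(x @ x.t(), dim=-1)

    explanation = torch._dynamo.explain(f)(torch.randn(32, 32))
    # graph_count / graph_break_count mirror the "Dynamo produced ..." summary line.
    print(explanation.graph_count, explanation.graph_break_count)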
2025-09-07T08:16:33.3464242Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:16:33.3467448Z from pkg_resources import resource_filename
2025-09-07T08:16:34.0135653Z 
2025-09-07T08:16:35.5264055Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:16:35.5264375Z loading model: 0it [00:01, ?it/s]
2025-09-07T08:16:35.5264622Z cpu eval OPTForCausalLM
2025-09-07T08:16:37.4165975Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:16:37.8367986Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:16:38.2703314Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:16:56.2448200Z Autotune Choices Stats:
2025-09-07T08:16:56.2448653Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.16462699977637385}
2025-09-07T08:16:56.2465790Z AUTOTUNE linear_unary(2048x768, 768x768, 768)
2025-09-07T08:16:56.2466126Z strides: [768, 1], [1, 0], [1]
2025-09-07T08:16:56.2466485Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:16:56.2466867Z cpp_CppMicroGemmAMX_0 0.1646 ms 100.0%
2025-09-07T08:16:56.2467217Z _linear_pointwise 0.1710 ms 96.3%
2025-09-07T08:16:56.2468565Z SingleProcess AUTOTUNE benchmarking takes 0.3196 seconds and 1.3367 seconds precompiling for 2 choices
2025-09-07T08:16:58.8237726Z Autotune Choices Stats:
2025-09-07T08:16:58.8238842Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_linear_pointwise", "best_time": 0.7843569999295141}
2025-09-07T08:16:58.8255317Z AUTOTUNE linear_unary(2048x768, 3072x768, 3072)
2025-09-07T08:16:58.8255668Z strides: [768, 1], [1, 0], [1]
2025-09-07T08:16:58.8256011Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:16:58.8256384Z _linear_pointwise 0.7844 ms 100.0%
2025-09-07T08:16:58.8256710Z cpp_CppMicroGemmAMX_4 0.9054 ms 86.6%
2025-09-07T08:16:58.8257269Z SingleProcess AUTOTUNE benchmarking takes 0.4193 seconds and 1.3736 seconds precompiling for 2 choices
2025-09-07T08:17:00.7036322Z Autotune Choices Stats:
2025-09-07T08:17:00.7037582Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_5", "best_time": 0.5309220000526693}
2025-09-07T08:17:00.7047233Z AUTOTUNE linear_unary(2048x3072, 768x3072, 768)
2025-09-07T08:17:00.7047620Z strides: [3072, 1], [1, 0], [1]
2025-09-07T08:17:00.7047961Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:17:00.7048263Z cpp_CppMicroGemmAMX_5 0.5309 ms 100.0%
2025-09-07T08:17:00.7048517Z _linear_pointwise 0.6248 ms 85.0%
2025-09-07T08:17:00.7048903Z SingleProcess AUTOTUNE benchmarking takes 0.4190 seconds and 1.3827 seconds precompiling for 2 choices
2025-09-07T08:17:09.8036945Z Autotune Choices Stats:
2025-09-07T08:17:09.8037670Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_linear_pointwise", "best_time": 27.529274000016812}
2025-09-07T08:17:09.8047276Z AUTOTUNE linear_unary(2048x768, 50272x768)
2025-09-07T08:17:09.8047607Z strides: [768, 1], [1, 0]
2025-09-07T08:17:09.8047841Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:17:09.8048072Z _linear_pointwise 27.5293 ms 100.0%
2025-09-07T08:17:09.8048340Z cpp_CppMicroGemmAMX_72 30.7676 ms 89.5%
2025-09-07T08:17:09.8048733Z SingleProcess AUTOTUNE benchmarking takes 2.5121 seconds and 1.3731 seconds precompiling for 2 choices
2025-09-07T08:17:10.4349238Z cudagraph partition due to non gpu ops
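In the last two AUTOTUNE blocks above the _linear_pointwise fallback beats the C++ AMX template (for example 27.5 ms vs 30.8 ms on the 2048x768 by 50272x768 LM-head GEMM), so Inductor keeps the fallback for those shapes. When investigating such choices locally, one option is to restrict which GEMM backends autotuning may consider; max_autotune_gemm_backends is an Inductor config knob in recent releases, but treat the exact name and accepted values as assumptions:

    import torch
    import torch._inductor.config as inductor_config

    # Assumed knob: limit autotuning to the ATen/fallback and C++ template backends.
    inductor_config.max_autotune_gemm_backends = "ATEN,CPP"

    linear = torch.nn.Linear(768, 50272, bias=False).eval()
    with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
        compiled = torch.compile(linear, mode="max-autotune")
        out = compiled(torch.randn(2048, 768))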
2025-09-07T08:17:10.4355334Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:17:10.4355736Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:17:10.4356086Z     return mod(**inputs)
2025-09-07T08:17:10.4356848Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4357234Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4357651Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
2025-09-07T08:17:10.4358048Z     outputs = self.model.decoder(
2025-09-07T08:17:10.4358426Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4358865Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4359284Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
2025-09-07T08:17:10.4359697Z     layer_outputs = decoder_layer(
2025-09-07T08:17:10.4360101Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:17:10.4360517Z     return super().__call__(*args, **kwargs)
2025-09-07T08:17:10.4360943Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward
2025-09-07T08:17:10.4361396Z     hidden_states, self_attn_weights = self.self_attn(
2025-09-07T08:17:10.4361900Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward
2025-09-07T08:17:10.4362347Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:17:10.4362840Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-09-07T08:17:10.4363386Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:17:10.4363591Z 
2025-09-07T08:17:10.4363722Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:17:10.4364130Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:17:10.4364507Z     return mod(**inputs)
2025-09-07T08:17:10.4364883Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4365298Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4365719Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
2025-09-07T08:17:10.4366153Z     outputs = self.model.decoder(
2025-09-07T08:17:10.4366558Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4367183Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4367626Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
2025-09-07T08:17:10.4368100Z     layer_outputs = decoder_layer(
2025-09-07T08:17:10.4368494Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:17:10.4368913Z     return super().__call__(*args, **kwargs)
2025-09-07T08:17:10.4369339Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward
2025-09-07T08:17:10.4369806Z     hidden_states, self_attn_weights = self.self_attn(
2025-09-07T08:17:10.4370245Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward
2025-09-07T08:17:10.4370706Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:17:10.4371203Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-09-07T08:17:10.4371726Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:17:10.4371910Z 
2025-09-07T08:17:10.4372301Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:17:10.4372875Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:17:10.4373286Z     return mod(**inputs)
2025-09-07T08:17:10.4373657Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4374068Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4374484Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
2025-09-07T08:17:10.4374912Z     outputs = self.model.decoder(
2025-09-07T08:17:10.4375332Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4375725Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4376140Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
2025-09-07T08:17:10.4376561Z     layer_outputs = decoder_layer(
2025-09-07T08:17:10.4376961Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:17:10.4377356Z     return super().__call__(*args, **kwargs)
2025-09-07T08:17:10.4377795Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward
2025-09-07T08:17:10.4378218Z     hidden_states = self.activation_fn(hidden_states)
2025-09-07T08:17:10.4378393Z 
2025-09-07T08:17:10.4384297Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:17:10.4384685Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:17:10.4385031Z     return mod(**inputs)
2025-09-07T08:17:10.4385392Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4385780Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4386184Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
2025-09-07T08:17:10.4386595Z     outputs = self.model.decoder(
2025-09-07T08:17:10.4386971Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4387367Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4387766Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
2025-09-07T08:17:10.4388214Z     layer_outputs = decoder_layer(
2025-09-07T08:17:10.4388582Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:17:10.4388973Z     return super().__call__(*args, **kwargs)
2025-09-07T08:17:10.4389378Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 288, in forward
2025-09-07T08:17:10.4389793Z     hidden_states = self.fc2(hidden_states)
2025-09-07T08:17:10.4389945Z 
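The "cudagraph partition due to non gpu ops" messages above and below come from Inductor's CUDA-graph partitioner: this benchmark runs entirely on CPU, so every region the partitioner inspects contains non-GPU ops, each candidate segment is excluded from CUDA-graph capture, and the originating stack is logged (SDPA, the transpose(1, 2).contiguous() after attention, the MLP activation, and fc2). On a CPU-only run these messages are expected noise rather than errors. A small sketch of how CUDA graphs can be switched off to silence them; treating torch._inductor.config.triton.cudagraphs as the relevant knob is an assumption about this particular build:

import torch
import torch._inductor.config as inductor_config

# Assumption: this Inductor build exposes the usual triton.cudagraphs flag.
# With it disabled, the compiler skips CUDA-graph partitioning entirely, so a
# CPU-only run no longer emits "cudagraph partition due to non gpu ops".
inductor_config.triton.cudagraphs = False

model = torch.nn.Linear(768, 768).eval()
compiled = torch.compile(model)
with torch.no_grad():
    compiled(torch.randn(8, 768))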
Found from : 2025-09-07T08:17:10.4462208Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4462546Z return mod(**inputs) 2025-09-07T08:17:10.4462883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4463240Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4463640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4464034Z outputs = self.model.decoder( 2025-09-07T08:17:10.4464401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4464773Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4465158Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4465553Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4465918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4466313Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4466692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-09-07T08:17:10.4467113Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:10.4467505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-09-07T08:17:10.4467901Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:10.4468338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:17:10.4468787Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:17:10.4468951Z 2025-09-07T08:17:10.4469030Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4469265Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:10.4469619Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4469946Z return mod(**inputs) 2025-09-07T08:17:10.4470268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4470623Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4470996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4471368Z outputs = self.model.decoder( 2025-09-07T08:17:10.4471703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4472063Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4472451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4472824Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4473169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4473523Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4473900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-09-07T08:17:10.4474295Z hidden_states = self.activation_fn(hidden_states) 2025-09-07T08:17:10.4474445Z 2025-09-07T08:17:10.4474556Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:17:10.4474917Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4475235Z return mod(**inputs) 2025-09-07T08:17:10.4475599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4475951Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4476333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4476718Z outputs = self.model.decoder( 2025-09-07T08:17:10.4477062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4477445Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4477824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4478208Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4478553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4478923Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4479309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-09-07T08:17:10.4479713Z hidden_states = self.activation_fn(hidden_states) 2025-09-07T08:17:10.4479868Z 2025-09-07T08:17:10.4479999Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:10.4480364Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4480714Z return mod(**inputs) 2025-09-07T08:17:10.4481073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4481457Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4481854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4482259Z outputs = self.model.decoder( 2025-09-07T08:17:10.4482641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4483022Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4483456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4483853Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4484234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4484628Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4485037Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 288, in forward 2025-09-07T08:17:10.4485446Z hidden_states = self.fc2(hidden_states) 2025-09-07T08:17:10.4485603Z 2025-09-07T08:17:10.4485695Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4485931Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4486167Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4486395Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4486617Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4486845Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4487170Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4487446Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:10.4487843Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4488226Z return mod(**inputs) 2025-09-07T08:17:10.4488598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4488991Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4489391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4489801Z outputs = self.model.decoder( 2025-09-07T08:17:10.4490210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4490616Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4491020Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4491419Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4491801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4492215Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4492624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-09-07T08:17:10.4493053Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:10.4493484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-09-07T08:17:10.4493927Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:10.4494390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:17:10.4494888Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:17:10.4495076Z 2025-09-07T08:17:10.4495181Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:10.4495537Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4495863Z return mod(**inputs) 2025-09-07T08:17:10.4496187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4496540Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4496902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4497280Z outputs = self.model.decoder( 2025-09-07T08:17:10.4497626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4497977Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4498339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4498716Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4499072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4499440Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4499819Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-09-07T08:17:10.4500217Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:10.4500633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-09-07T08:17:10.4501044Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:10.4501504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:17:10.4501960Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:17:10.4502122Z 2025-09-07T08:17:10.4502203Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4502445Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:10.4502804Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4503131Z return mod(**inputs) 2025-09-07T08:17:10.4503454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4503821Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4504194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4504637Z outputs = self.model.decoder( 2025-09-07T08:17:10.4505012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4505389Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4505800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4506185Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4506565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4506917Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4507297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-09-07T08:17:10.4507695Z hidden_states = self.activation_fn(hidden_states) 2025-09-07T08:17:10.4507848Z 2025-09-07T08:17:10.4507958Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:17:10.4508313Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4508632Z return mod(**inputs) 2025-09-07T08:17:10.4508975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4509329Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4509701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4510080Z outputs = self.model.decoder( 2025-09-07T08:17:10.4510418Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4510772Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4511140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4511515Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4511852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4512211Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4512585Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-09-07T08:17:10.4512980Z hidden_states = self.activation_fn(hidden_states) 2025-09-07T08:17:10.4513129Z 2025-09-07T08:17:10.4513218Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4513425Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4513636Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4513843Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4514048Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4514245Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4514454Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4514685Z cudagraph 
partition due to non gpu ops. Found from : 2025-09-07T08:17:10.4515041Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4515355Z return mod(**inputs) 2025-09-07T08:17:10.4515684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4516034Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4516407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4516783Z outputs = self.model.decoder( 2025-09-07T08:17:10.4517121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4517472Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4517842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4518273Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4518616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4518978Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4519356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-09-07T08:17:10.4519758Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:10.4520167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-09-07T08:17:10.4520567Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:10.4521023Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:17:10.4521521Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:17:10.4521712Z 2025-09-07T08:17:10.4521827Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:10.4522194Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4522543Z return mod(**inputs) 2025-09-07T08:17:10.4522888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4523264Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4523665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4524052Z outputs = self.model.decoder( 2025-09-07T08:17:10.4524415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4524784Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4525177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4525572Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4525933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4526313Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4526707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-09-07T08:17:10.4527201Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:10.4527622Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-09-07T08:17:10.4528049Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:10.4528531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:17:10.4529024Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:17:10.4529189Z 2025-09-07T08:17:10.4529286Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4529529Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:10.4529904Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4530240Z return mod(**inputs) 2025-09-07T08:17:10.4530580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4530940Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4531317Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4531705Z outputs = self.model.decoder( 2025-09-07T08:17:10.4532058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4532466Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4532844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4533236Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4533594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4533966Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4534379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-09-07T08:17:10.4534781Z hidden_states = self.activation_fn(hidden_states) 2025-09-07T08:17:10.4534946Z 2025-09-07T08:17:10.4535054Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:17:10.4535422Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4535757Z return mod(**inputs) 2025-09-07T08:17:10.4536099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4536456Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4536891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4537276Z outputs = self.model.decoder( 2025-09-07T08:17:10.4537623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4537962Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4538331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4538703Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4539052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4539418Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4539783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward 2025-09-07T08:17:10.4540178Z hidden_states = self.activation_fn(hidden_states) 2025-09-07T08:17:10.4540336Z 2025-09-07T08:17:10.4540440Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:10.4540798Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4541124Z return mod(**inputs) 2025-09-07T08:17:10.4541447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4541808Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4542193Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4542573Z outputs = self.model.decoder( 2025-09-07T08:17:10.4542919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4543279Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4543660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4544021Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4544361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4544715Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4545221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 288, in forward 2025-09-07T08:17:10.4545603Z hidden_states = self.fc2(hidden_states) 2025-09-07T08:17:10.4545738Z 2025-09-07T08:17:10.4545825Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4546031Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4546319Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4546524Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4546722Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4546916Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4547119Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4547349Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:10.4547708Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4548048Z return mod(**inputs) 2025-09-07T08:17:10.4548399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4548863Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4549240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4549618Z outputs = self.model.decoder( 2025-09-07T08:17:10.4549963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4550317Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4550725Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4551101Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4551454Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4551812Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4552187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-09-07T08:17:10.4552588Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:10.4552989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-09-07T08:17:10.4553388Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:10.4553831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:17:10.4554309Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:17:10.4554498Z 2025-09-07T08:17:10.4554602Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:10.4554957Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:10.4555280Z return mod(**inputs) 2025-09-07T08:17:10.4555608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4555965Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4556347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward 2025-09-07T08:17:10.4556732Z outputs = self.model.decoder( 2025-09-07T08:17:10.4557083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper 2025-09-07T08:17:10.4557447Z output = func(self, *args, **kwargs) 2025-09-07T08:17:10.4557836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward 2025-09-07T08:17:10.4558210Z layer_outputs = decoder_layer( 2025-09-07T08:17:10.4558553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:10.4558913Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:10.4559286Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 259, in forward 2025-09-07T08:17:10.4559685Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:10.4560080Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 184, in forward 2025-09-07T08:17:10.4560516Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:10.4560957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:17:10.4561418Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:17:10.4561585Z 2025-09-07T08:17:10.4561677Z cudagraph partition due to non gpu ops 2025-09-07T08:17:10.4561923Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-07T08:17:10.4562300Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:17:10.4562727Z     return mod(**inputs)
2025-09-07T08:17:10.4563060Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4563422Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4563796Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
2025-09-07T08:17:10.4564183Z     outputs = self.model.decoder(
2025-09-07T08:17:10.4564548Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4564968Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4565373Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
2025-09-07T08:17:10.4565770Z     layer_outputs = decoder_layer(
2025-09-07T08:17:10.4566151Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:17:10.4566540Z     return super().__call__(*args, **kwargs)
2025-09-07T08:17:10.4567001Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 286, in forward
2025-09-07T08:17:10.4567455Z     hidden_states = self.activation_fn(hidden_states)
2025-09-07T08:17:10.4567629Z
2025-09-07T08:17:10.4567741Z cudagraph partition due to non gpu ops.
2025-09-07T08:17:10.4573317Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4573551Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4573782Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4574000Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4574224Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4574447Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4574694Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4574958Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T08:17:10.4601483Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:17:10.4601814Z     return mod(**inputs)
2025-09-07T08:17:10.4602148Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4602504Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4602877Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
2025-09-07T08:17:10.4603260Z     outputs = self.model.decoder(
2025-09-07T08:17:10.4603632Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4604012Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4604383Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
2025-09-07T08:17:10.4604765Z     layer_outputs = decoder_layer(
2025-09-07T08:17:10.4605123Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:17:10.4605512Z     return super().__call__(*args, **kwargs)
2025-09-07T08:17:10.4605899Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 288, in forward
2025-09-07T08:17:10.4606284Z     hidden_states = self.fc2(hidden_states)
2025-09-07T08:17:10.4606433Z
2025-09-07T08:17:10.4606516Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4606736Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4607026Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4607240Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4607456Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4607672Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4607907Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4608141Z cudagraph partition due to non gpu ops.
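Each "cudagraph partition due to non gpu ops" entry marks a point where Inductor splits the captured graph around an op that does not run on the GPU; in this shard the model is evaluated on CPU, so the partitioner fires at every flagged site (SDPA, the activation, fc2). A rough repro sketch, under the assumption that compiling a CPU-resident module with the cudagraph-oriented "reduce-overhead" mode exercises the same code path (the exact diagnostic text may differ between PyTorch builds):

    # Hedged repro sketch: compile a CPU-resident module with "reduce-overhead".
    # On CPU there is nothing to capture into a CUDA graph, which is the situation
    # the partition messages above describe; whether a given build prints this exact
    # diagnostic is an assumption, not guaranteed.
    import torch

    model = torch.nn.Sequential(
        torch.nn.Linear(768, 768),
        torch.nn.GELU(),
        torch.nn.Linear(768, 768),
    )
    compiled = torch.compile(model, mode="reduce-overhead")

    x = torch.randn(8, 768)  # CPU tensor, i.e. "non gpu ops" from the partitioner's view
    with torch.no_grad():
        out = compiled(x)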
Found from :
2025-09-07T08:17:10.4736856Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:17:10.4737226Z     return mod(**inputs)
2025-09-07T08:17:10.4737614Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4737985Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4738382Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 826, in forward
2025-09-07T08:17:10.4738774Z     outputs = self.model.decoder(
2025-09-07T08:17:10.4739136Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4739492Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4739877Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 653, in forward
2025-09-07T08:17:10.4740265Z     layer_outputs = decoder_layer(
2025-09-07T08:17:10.4740489Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:17:10.4740583Z     return super().__call__(*args, **kwargs)
2025-09-07T08:17:10.4740822Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 288, in forward
2025-09-07T08:17:10.4740917Z     hidden_states = self.fc2(hidden_states)
2025-09-07T08:17:10.4740923Z
2025-09-07T08:17:10.4741007Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4741090Z cudagraph partition due to non gpu ops
2025-09-07T08:17:10.4741214Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:17:10.4741421Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:17:10.4741501Z     return mod(**inputs)
2025-09-07T08:17:10.4741728Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/generic.py", line 961, in wrapper
2025-09-07T08:17:10.4741814Z     output = func(self, *args, **kwargs)
2025-09-07T08:17:10.4742061Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 847, in forward
2025-09-07T08:17:10.4742141Z     loss = self.loss_function(
2025-09-07T08:17:10.4742397Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss
2025-09-07T08:17:10.4742582Z     loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
2025-09-07T08:17:10.4742858Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy
2025-09-07T08:17:10.4743181Z     loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction)
2025-09-07T08:17:10.4743185Z
2025-09-07T08:17:19.4189812Z Compilation time (from dynamo_timed): 39.002366334
2025-09-07T08:17:19.4598370Z pass
2025-09-07T08:17:19.4598831Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:17:19.4601177Z TIMING: _recursive_pre_grad_passes:0.03864 _recursive_joint_graph_passes:0.36147 _recursive_post_grad_passes:0.08738 linear_unary_template_precompiling:5.47751 linear_unary_template_autotuning:3.66083 async_compile.wait:0.841 code_gen:7.9203 inductor_compile:31.88008 backend_compile:36.73741 gc:0.00079 entire_frame_compile:39.00237 total_wall_time:39.00237
2025-09-07T08:17:19.4602343Z STATS: call_* op count: 417 | FakeTensorMode.__torch_dispatch__:26228 | FakeTensor.__torch_dispatch__:3151 | ProxyTorchDispatchMode.__torch_dispatch__:7311
2025-09-07T08:17:19.4602915Z Dynamo produced 1 graphs covering 417 ops with 0 graph breaks (0 unique)
2025-09-07T08:17:22.3768701Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:17:22.3769727Z   import pynvml  # type: ignore[import]
2025-09-07T08:17:25.0584714Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:17:25.0585614Z   from pkg_resources import resource_filename
2025-09-07T08:17:25.7275718Z
2025-09-07T08:17:26.8767139Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:17:26.8767503Z loading model: 0it [00:01, ?it/s]
2025-09-07T08:17:26.8767764Z cpu eval PLBartForCausalLM
2025-09-07T08:17:27.5473460Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:17:27.7391391Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:17:27.9317270Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:17:46.8545981Z Autotune Choices Stats:
2025-09-07T08:17:46.8546408Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_36", "best_time": 5.622127499918861}
2025-09-07T08:17:46.8561120Z AUTOTUNE linear_unary(1024x768, 50005x768)
2025-09-07T08:17:46.8561645Z   strides: [768, 1], [1, 0]
2025-09-07T08:17:46.8561927Z   dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:17:46.8562246Z   cpp_CppMicroGemmAMX_36 5.6221 ms 100.0%
2025-09-07T08:17:46.8562595Z   _linear_pointwise 13.6669 ms 41.1%
2025-09-07T08:17:46.8563121Z SingleProcess AUTOTUNE benchmarking takes 1.3760 seconds and 1.3727 seconds precompiling for 2 choices
2025-09-07T08:17:47.2196785Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2197313Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2198246Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2198702Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2198973Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2199326Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2199579Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2199804Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2200109Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2200711Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2200989Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2201229Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2202066Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2202349Z cudagraph partition due to non gpu ops.
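The AUTOTUNE block above is Inductor's max-autotune pass comparing its C++ AMX micro-GEMM template (cpp_CppMicroGemmAMX_36, 5.62 ms) against the oneDNN _linear_pointwise fallback (13.67 ms) for a bfloat16 linear with a 1024x768 input and a 50005x768 weight, and keeping the AMX template. A hedged sketch of the kind of layer and shapes being tuned; treating the 50005-wide output as a stand-alone LM-head-style nn.Linear is an assumption for illustration, not how the benchmark harness builds the model:

    # Hedged sketch of the shape named in the AUTOTUNE line: bf16 (1024x768) @ (768x50005).
    # Compiling with mode="max-autotune" triggers the same kind of template-vs-fallback
    # benchmarking Inductor reports above.
    import torch

    lm_head = torch.nn.Linear(768, 50005, bias=False).to(torch.bfloat16)
    x = torch.randn(1024, 768, dtype=torch.bfloat16)

    compiled_head = torch.compile(lm_head, mode="max-autotune")
    with torch.no_grad():
        logits = compiled_head(x)  # shape (1024, 50005)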
Found from :
2025-09-07T08:17:47.2202803Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:17:47.2203220Z     return mod(**inputs)
2025-09-07T08:17:47.2203718Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward
2025-09-07T08:17:47.2204167Z     outputs = self.model.decoder(
2025-09-07T08:17:47.2204684Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
2025-09-07T08:17:47.2205130Z     layer_outputs = decoder_layer(
2025-09-07T08:17:47.2205550Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:17:47.2205964Z     return super().__call__(*args, **kwargs)
2025-09-07T08:17:47.2206427Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward
2025-09-07T08:17:47.2207117Z     hidden_states, self_attn_weights = self.self_attn(
2025-09-07T08:17:47.2208036Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-09-07T08:17:47.2208826Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:17:47.2209637Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-09-07T08:17:47.2210333Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:17:47.2210553Z
2025-09-07T08:17:47.2210675Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:17:47.2211076Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:17:47.2211444Z     return mod(**inputs)
2025-09-07T08:17:47.2211923Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward
2025-09-07T08:17:47.2212374Z     outputs = self.model.decoder(
2025-09-07T08:17:47.2212819Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
2025-09-07T08:17:47.2213257Z     layer_outputs = decoder_layer(
2025-09-07T08:17:47.2213651Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:17:47.2214071Z     return super().__call__(*args, **kwargs)
2025-09-07T08:17:47.2214482Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward
2025-09-07T08:17:47.2214908Z     hidden_states, self_attn_weights = self.self_attn(
2025-09-07T08:17:47.2215336Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-09-07T08:17:47.2215768Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:17:47.2216221Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-09-07T08:17:47.2216690Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:17:47.2216858Z
2025-09-07T08:17:47.2216941Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2217159Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2217373Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2217585Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2217789Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2218001Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2218212Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2218421Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2218623Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2218924Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2219173Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2219418Z cudagraph partition due to non gpu ops.
2025-09-07T08:17:47.2244853Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2245385Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2245610Z cudagraph partition due to non gpu ops
2025-09-07T08:17:47.2245812Z cudagraph partition due
to non gpu ops 2025-09-07T08:17:47.2246037Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2246260Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2246481Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2246754Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2247094Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2247323Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2247577Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:17:47.2247963Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:47.2248320Z return mod(**inputs) 2025-09-07T08:17:47.2248710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-09-07T08:17:47.2249101Z outputs = self.model.decoder( 2025-09-07T08:17:47.2249477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:17:47.2249867Z layer_outputs = decoder_layer( 2025-09-07T08:17:47.2250213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:47.2250577Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:47.2250967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:17:47.2251392Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:47.2251791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:17:47.2252191Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:47.2252619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:17:47.2253078Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:17:47.2253254Z 2025-09-07T08:17:47.2253355Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:47.2253706Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:47.2254025Z return mod(**inputs) 2025-09-07T08:17:47.2254382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-09-07T08:17:47.2254792Z outputs = self.model.decoder( 2025-09-07T08:17:47.2255171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:17:47.2255554Z layer_outputs = decoder_layer( 2025-09-07T08:17:47.2255894Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:47.2256243Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:47.2256620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:17:47.2257034Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:47.2257451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:17:47.2257852Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:47.2258279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:17:47.2258710Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:17:47.2258873Z 2025-09-07T08:17:47.2258951Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2259159Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2259361Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2259555Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2259758Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2259959Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2260158Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2260353Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2260613Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2260814Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2261025Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2261244Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:47.2261595Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:47.2261908Z return mod(**inputs) 2025-09-07T08:17:47.2262292Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-09-07T08:17:47.2262685Z outputs = self.model.decoder( 2025-09-07T08:17:47.2263062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:17:47.2263455Z layer_outputs = decoder_layer( 2025-09-07T08:17:47.2263803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:47.2264169Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:47.2264546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:17:47.2265010Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:47.2265412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:17:47.2265817Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:47.2266252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:17:47.2266715Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:17:47.2266902Z 2025-09-07T08:17:47.2267007Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:17:47.2267364Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:47.2267690Z return mod(**inputs) 2025-09-07T08:17:47.2268052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-09-07T08:17:47.2268439Z outputs = self.model.decoder( 2025-09-07T08:17:47.2268824Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:17:47.2269218Z layer_outputs = decoder_layer( 2025-09-07T08:17:47.2269563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:47.2269919Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:47.2270302Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:17:47.2270770Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:47.2271187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:17:47.2271597Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:47.2272029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:17:47.2272476Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:17:47.2272644Z 2025-09-07T08:17:47.2272726Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2272940Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2273153Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2273352Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2273559Z cudagraph partition due 
to non gpu ops 2025-09-07T08:17:47.2273768Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2273972Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2274210Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2274413Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2274617Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2274823Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2275050Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:17:47.2275410Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:47.2275730Z return mod(**inputs) 2025-09-07T08:17:47.2276110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-09-07T08:17:47.2276501Z outputs = self.model.decoder( 2025-09-07T08:17:47.2276874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:17:47.2277267Z layer_outputs = decoder_layer( 2025-09-07T08:17:47.2277614Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:47.2277974Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:47.2278375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:17:47.2278784Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:47.2279195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:17:47.2279613Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:47.2280069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:17:47.2280522Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:17:47.2280713Z 2025-09-07T08:17:47.2280820Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:17:47.2281185Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:47.2281511Z return mod(**inputs) 2025-09-07T08:17:47.2281884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1678, in forward 2025-09-07T08:17:47.2282282Z outputs = self.model.decoder( 2025-09-07T08:17:47.2282677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:17:47.2283093Z layer_outputs = decoder_layer( 2025-09-07T08:17:47.2283442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:17:47.2283796Z return super().__call__(*args, **kwargs) 2025-09-07T08:17:47.2284212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:17:47.2284636Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:17:47.2285060Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:17:47.2285478Z attn_output, attn_weights = attention_interface( 2025-09-07T08:17:47.2285922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:17:47.2286410Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:17:47.2286591Z 2025-09-07T08:17:47.2286680Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2286988Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2287223Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2287454Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2287685Z cudagraph partition due to non gpu ops 2025-09-07T08:17:47.2287955Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:17:47.2288338Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:17:47.2288678Z return mod(**inputs) 2025-09-07T08:17:47.2289049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1700, in forward 2025-09-07T08:17:47.2289555Z loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) 2025-09-07T08:17:47.2289758Z 2025-09-07T08:17:52.2077872Z Compilation time (from dynamo_timed): 22.695578204 2025-09-07T08:17:52.2304346Z pass 2025-09-07T08:17:52.2307679Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:17:52.2308817Z TIMING: _recursive_pre_grad_passes:0.02144 _recursive_joint_graph_passes:0.25746 _recursive_post_grad_passes:0.04735 linear_unary_template_precompiling:1.37901 linear_unary_template_autotuning:1.37375 async_compile.wait:0.81664 code_gen:4.3972 inductor_compile:18.54435 backend_compile:21.40199 gc:0.00149 entire_frame_compile:22.69558 total_wall_time:22.69558 2025-09-07T08:17:52.2310012Z STATS: call_* op count: 200 | FakeTensorMode.__torch_dispatch__:14572 | FakeTensor.__torch_dispatch__:1669 | ProxyTorchDispatchMode.__torch_dispatch__:3913 2025-09-07T08:17:52.2310691Z Dynamo produced 1 graphs covering 200 ops with 0 graph breaks (0 unique) 2025-09-07T08:17:54.8979983Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. 
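Every "cudagraph partition due to non gpu ops" entry in the block above resolves to the same two call sites in transformers' SDPA integration: the torch.nn.functional.scaled_dot_product_attention call (sdpa_attention.py, line 81) and the transpose(1, 2).contiguous() that follows it (line 91), hit apparently once per PLBart decoder layer. Because this shard runs on CPU (note the "cpu eval PLBartForConditionalGeneration" line below), none of these ops are GPU ops, which appears to be why Inductor's cudagraph partitioning flags each of them. A minimal sketch of the flagged pattern, with purely illustrative shapes, is:

# Sketch only: the attention pattern the tracebacks above point at, i.e. an SDPA
# call followed by transpose(1, 2).contiguous(), as in transformers'
# sdpa_attention_forward. Shapes and the is_causal flag are illustrative assumptions.
import torch
import torch.nn.functional as F

batch, heads, seq, head_dim = 2, 12, 128, 64
query = torch.randn(batch, heads, seq, head_dim)
key = torch.randn(batch, heads, seq, head_dim)
value = torch.randn(batch, heads, seq, head_dim)

# sdpa_attention.py:81, the SDPA kernel itself.
attn_output = F.scaled_dot_product_attention(query, key, value, is_causal=True)

# sdpa_attention.py:91, move the head dim back next to the sequence dim and
# force a contiguous layout before the output projection.
attn_output = attn_output.transpose(1, 2).contiguous()
print(attn_output.shape)  # torch.Size([2, 128, 12, 64])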
2025-09-07T08:17:54.8979983Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:17:54.8980893Z import pynvml # type: ignore[import]
2025-09-07T08:17:57.6237806Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:17:57.6238936Z from pkg_resources import resource_filename
2025-09-07T08:17:58.2767728Z
2025-09-07T08:18:00.4333329Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:18:00.4333660Z loading model: 0it [00:02, ?it/s]
2025-09-07T08:18:00.4333959Z cpu eval PLBartForConditionalGeneration
2025-09-07T08:18:01.6286600Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:18:01.9951807Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:18:02.3085275Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
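Two deprecation warnings surface during model loading and are unrelated to the partition diagnostics: torch.cuda still imports the deprecated pynvml distribution, and librosa still imports resource_filename from pkg_resources. Neither affects the benchmark result. If they were cleaned up in an environment like this one, the usual fixes would be along these lines (a sketch only; the pip commands and the example package are assumptions, not taken from the log):

# Sketch only, not part of the benchmark harness.
#
# 1) pynvml deprecation: the warning itself names the replacement distribution,
#    which ships the same `pynvml` module:
#        pip uninstall -y pynvml && pip install nvidia-ml-py
#
# 2) pkg_resources deprecation: resource_filename-style lookups can usually be
#    replaced with the standard-library importlib.resources (available on the
#    Python 3.9 used here).
from importlib import resources

# Hypothetical example: locate a file bundled with an installed package, the
# modern equivalent of pkg_resources.resource_filename("json", "__init__.py").
init_py = resources.files("json") / "__init__.py"
print(init_py.read_text()[:40])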
2025-09-07T08:18:29.9952064Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:18:29.9953635Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:18:29.9961565Z return mod(**inputs)
2025-09-07T08:18:29.9962808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1357, in forward
2025-09-07T08:18:29.9963419Z decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id)
2025-09-07T08:18:29.9964057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1084, in shift_tokens_right
2025-09-07T08:18:29.9964634Z index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
2025-09-07T08:18:29.9964874Z
2025-09-07T08:18:29.9964978Z cudagraph partition due to non gpu ops
2025-09-07T08:18:29.9968612Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:18:29.9969076Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:18:29.9969449Z return mod(**inputs)
2025-09-07T08:18:29.9969874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
2025-09-07T08:18:29.9970312Z outputs = self.model(
2025-09-07T08:18:29.9970717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward
2025-09-07T08:18:29.9971133Z encoder_outputs = self.encoder(
2025-09-07T08:18:29.9971538Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward
2025-09-07T08:18:29.9972025Z layer_outputs = encoder_layer(
2025-09-07T08:18:29.9972406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:18:29.9972808Z return super().__call__(*args, **kwargs)
2025-09-07T08:18:29.9973245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward
2025-09-07T08:18:29.9973697Z hidden_states, attn_weights = self.self_attn(
2025-09-07T08:18:29.9974109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-09-07T08:18:29.9974543Z attn_output, attn_weights = attention_interface(
2025-09-07T08:18:29.9975007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-09-07T08:18:29.9975511Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:18:29.9975714Z
2025-09-07T08:18:29.9975842Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:18:29.9976231Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:18:29.9976575Z return mod(**inputs)
2025-09-07T08:18:29.9976974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
2025-09-07T08:18:29.9977377Z outputs = self.model(
2025-09-07T08:18:29.9977754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1189, in forward
2025-09-07T08:18:29.9978152Z encoder_outputs = self.encoder(
2025-09-07T08:18:29.9978550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 669, in forward
2025-09-07T08:18:29.9978952Z layer_outputs = encoder_layer(
2025-09-07T08:18:29.9979313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:18:29.9979680Z return super().__call__(*args, **kwargs)
2025-09-07T08:18:29.9980081Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 496, in forward
2025-09-07T08:18:29.9980499Z hidden_states, attn_weights = self.self_attn(
2025-09-07T08:18:29.9980913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-09-07T08:18:29.9981333Z attn_output, attn_weights = attention_interface(
2025-09-07T08:18:29.9981781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-09-07T08:18:29.9982295Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:18:29.9982479Z
2025-09-07T08:18:29.9982569Z cudagraph partition due to non gpu ops
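The remaining 08:18:30 entries repeat the same encoder and decoder call sites, evidently once per layer, so the raw log is dominated by near-identical blocks. A throwaway script like the one below (file name, function name, and the log-layout handling are all assumptions, not part of the harness) can collapse a saved copy of such a log into a count per distinct "Found from" call site:

# Throwaway sketch, not part of the benchmark harness: collapse the repeated
# "cudagraph partition due to non gpu ops" diagnostics in a saved copy of a log
# like this one and count how often each distinct "Found from" call site appears.
import re
from collections import Counter

def count_partition_sites(log_text: str) -> Counter:
    counts: Counter = Counter()
    # Drop the per-line timestamps that the Actions log prepends.
    lines = [re.sub(r"^\S+Z ?", "", ln) for ln in log_text.splitlines()]
    block = None  # frames of the traceback currently being collected
    for line in lines:
        if "cudagraph partition due to non gpu ops. Found from" in line:
            block = []
        elif block is not None:
            if line.strip():
                block.append(line.strip())
            else:
                # A blank line ends the traceback; key the count by its innermost frame.
                if block:
                    counts[block[-1]] += 1
                block = None
    return counts

if __name__ == "__main__":
    with open("nightly-dynamo-benchmarks.log") as f:  # hypothetical file name
        for site, hits in count_partition_sites(f.read()).most_common():
            print(f"{hits:4d}  {site}")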
2025-09-07T08:18:30.0063751Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:18:30.0064133Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:18:30.0064458Z return mod(**inputs)
2025-09-07T08:18:30.0064833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
2025-09-07T08:18:30.0065220Z outputs = self.model(
2025-09-07T08:18:30.0065588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward
2025-09-07T08:18:30.0065974Z decoder_outputs = self.decoder(
2025-09-07T08:18:30.0066349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
2025-09-07T08:18:30.0066740Z layer_outputs = decoder_layer(
2025-09-07T08:18:30.0067121Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:18:30.0067475Z return super().__call__(*args, **kwargs)
2025-09-07T08:18:30.0067865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward
2025-09-07T08:18:30.0068285Z hidden_states, self_attn_weights = self.self_attn(
2025-09-07T08:18:30.0068709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-09-07T08:18:30.0069112Z attn_output, attn_weights = attention_interface(
2025-09-07T08:18:30.0069550Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-09-07T08:18:30.0070021Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:18:30.0070203Z
2025-09-07T08:18:30.0070323Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:18:30.0070679Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:18:30.0070988Z return mod(**inputs)
2025-09-07T08:18:30.0071362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
2025-09-07T08:18:30.0071748Z outputs = self.model(
2025-09-07T08:18:30.0072112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward
2025-09-07T08:18:30.0072493Z decoder_outputs = self.decoder(
2025-09-07T08:18:30.0072861Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
2025-09-07T08:18:30.0073239Z layer_outputs = decoder_layer(
2025-09-07T08:18:30.0073579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:18:30.0073936Z return super().__call__(*args, **kwargs)
2025-09-07T08:18:30.0074322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward
2025-09-07T08:18:30.0074731Z hidden_states, self_attn_weights = self.self_attn(
2025-09-07T08:18:30.0075132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-09-07T08:18:30.0075537Z attn_output, attn_weights = attention_interface(
2025-09-07T08:18:30.0075966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-09-07T08:18:30.0076401Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:18:30.0076565Z
2025-09-07T08:18:30.0076645Z cudagraph partition due to non gpu ops
2025-09-07T08:18:30.0078318Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:18:30.0078672Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:18:30.0078994Z return mod(**inputs)
2025-09-07T08:18:30.0079372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
2025-09-07T08:18:30.0079761Z outputs = self.model(
2025-09-07T08:18:30.0080134Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward
2025-09-07T08:18:30.0081162Z decoder_outputs = self.decoder(
2025-09-07T08:18:30.0081560Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
2025-09-07T08:18:30.0081956Z layer_outputs = decoder_layer(
2025-09-07T08:18:30.0082319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:18:30.0082690Z return super().__call__(*args, **kwargs)
2025-09-07T08:18:30.0083119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward
2025-09-07T08:18:30.0083554Z hidden_states, cross_attn_weights = self.encoder_attn(
2025-09-07T08:18:30.0083981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-09-07T08:18:30.0084405Z attn_output, attn_weights = attention_interface(
2025-09-07T08:18:30.0084865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-09-07T08:18:30.0085380Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:18:30.0085583Z
2025-09-07T08:18:30.0085725Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:18:30.0086120Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:18:30.0086485Z return mod(**inputs)
2025-09-07T08:18:30.0086978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward
2025-09-07T08:18:30.0087445Z outputs = self.model(
2025-09-07T08:18:30.0087873Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward
2025-09-07T08:18:30.0088290Z decoder_outputs = self.decoder(
2025-09-07T08:18:30.0088676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward
2025-09-07T08:18:30.0089074Z layer_outputs = decoder_layer(
2025-09-07T08:18:30.0089432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:18:30.0089781Z return super().__call__(*args, **kwargs)
2025-09-07T08:18:30.0090169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward
2025-09-07T08:18:30.0090585Z hidden_states, cross_attn_weights = self.encoder_attn(
2025-09-07T08:18:30.0090991Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward
2025-09-07T08:18:30.0091403Z attn_output, attn_weights = attention_interface(
2025-09-07T08:18:30.0091831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-09-07T08:18:30.0092285Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:18:30.0092455Z
2025-09-07T08:18:30.0092537Z cudagraph partition due to non gpu ops
2025-09-07T08:18:30.0109645Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:18:30.0110007Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0110324Z return mod(**inputs) 2025-09-07T08:18:30.0110679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0111058Z outputs = self.model( 2025-09-07T08:18:30.0111433Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0111831Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0112223Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0112607Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0112968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0113326Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0113769Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-09-07T08:18:30.0114186Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:18:30.0114600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0115007Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0115437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:18:30.0115896Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:18:30.0116071Z 2025-09-07T08:18:30.0116174Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0116525Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0116843Z return mod(**inputs) 2025-09-07T08:18:30.0117218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0117623Z outputs = self.model( 2025-09-07T08:18:30.0117974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0118360Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0118737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0119116Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0119449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0119805Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0120192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-09-07T08:18:30.0120606Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:18:30.0121016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0121414Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0121844Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:18:30.0122281Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:18:30.0122433Z 2025-09-07T08:18:30.0122519Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0122728Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0122954Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0123173Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0123374Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0123570Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0123766Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0123973Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0124176Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0124386Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0124585Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0124837Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0125205Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0125541Z return mod(**inputs) 2025-09-07T08:18:30.0125914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0126318Z outputs = self.model( 2025-09-07T08:18:30.0126699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0127197Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0127706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0128122Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0128495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0128857Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0129253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:18:30.0129667Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:18:30.0130068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0130482Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0130924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:18:30.0131400Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:18:30.0131585Z 2025-09-07T08:18:30.0131699Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0132057Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0132376Z return mod(**inputs) 2025-09-07T08:18:30.0132742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0133127Z outputs = self.model( 2025-09-07T08:18:30.0133484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0133876Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0134255Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0134645Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0134995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0135351Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0135746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:18:30.0136160Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:18:30.0136564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0136968Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0137437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:18:30.0137889Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:18:30.0138058Z 2025-09-07T08:18:30.0138145Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0138359Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0138560Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0138765Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0138984Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0139194Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0139388Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0139593Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0139829Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0140189Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0140507Z return mod(**inputs) 2025-09-07T08:18:30.0140876Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0141260Z outputs = self.model( 2025-09-07T08:18:30.0141641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0142032Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0142408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0142798Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0143139Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0143493Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0143879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-09-07T08:18:30.0144290Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:18:30.0144706Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0145245Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0145679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:18:30.0146138Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:18:30.0146330Z 2025-09-07T08:18:30.0146438Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0146808Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0147128Z return mod(**inputs) 2025-09-07T08:18:30.0147488Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0147863Z outputs = self.model( 2025-09-07T08:18:30.0148230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0148615Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0148992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0149378Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0149713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0150065Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0150449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-09-07T08:18:30.0150916Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:18:30.0151347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0151764Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0152211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:18:30.0152669Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:18:30.0152836Z 2025-09-07T08:18:30.0152952Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0153170Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0153386Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0153592Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0153793Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0153990Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0154194Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0154395Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0154599Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0154793Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0154998Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0155257Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0155621Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0155946Z return mod(**inputs) 2025-09-07T08:18:30.0156313Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0156708Z outputs = self.model( 2025-09-07T08:18:30.0157091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0157480Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0157870Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0158269Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0158632Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0159006Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0159412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:18:30.0159831Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:18:30.0160257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0160682Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0161143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:18:30.0161633Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:18:30.0161820Z 2025-09-07T08:18:30.0161935Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0162335Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0162691Z return mod(**inputs) 2025-09-07T08:18:30.0163094Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0163511Z outputs = self.model( 2025-09-07T08:18:30.0163911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0164337Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0164756Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0165225Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0165596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0165992Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0166423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:18:30.0166884Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:18:30.0167435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0167890Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0168368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:18:30.0168846Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:18:30.0169013Z 2025-09-07T08:18:30.0169106Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0169317Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0169537Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0169756Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0169995Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0170210Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0170424Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0170647Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0170914Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0171284Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0171611Z return mod(**inputs) 2025-09-07T08:18:30.0171995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0172394Z outputs = self.model( 2025-09-07T08:18:30.0172778Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0173174Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0173569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0173974Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0174333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0174705Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0175106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-09-07T08:18:30.0175542Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:18:30.0175985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0176398Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0176836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:18:30.0177300Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:18:30.0177487Z 2025-09-07T08:18:30.0177594Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0177954Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0178277Z return mod(**inputs) 2025-09-07T08:18:30.0178637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0179029Z outputs = self.model( 2025-09-07T08:18:30.0179402Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0179841Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0180246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0180631Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0180980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0181345Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0181758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-09-07T08:18:30.0182194Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:18:30.0182629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0183043Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0183490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:18:30.0183944Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:18:30.0184104Z 2025-09-07T08:18:30.0184210Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0184421Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0184631Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0184841Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0185047Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0185247Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0185453Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0185660Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0185868Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0186063Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0186270Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0186513Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0186873Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0187191Z return mod(**inputs) 2025-09-07T08:18:30.0187564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0187951Z outputs = self.model( 2025-09-07T08:18:30.0188331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0188722Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0189101Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0189487Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0189839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0190206Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0190593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:18:30.0191014Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:18:30.0191448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0191870Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0192315Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:18:30.0192768Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:18:30.0192952Z 2025-09-07T08:18:30.0193055Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0193444Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0193777Z return mod(**inputs) 2025-09-07T08:18:30.0194127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0194489Z outputs = self.model( 2025-09-07T08:18:30.0194837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0195218Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0195610Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0195984Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0196335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0196683Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0197059Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:18:30.0197450Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:18:30.0197851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0198248Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0198665Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:18:30.0199094Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:18:30.0199247Z 2025-09-07T08:18:30.0199333Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0199535Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0199740Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0199943Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0200146Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0200337Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0200542Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0200750Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0200986Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0201337Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0201658Z return mod(**inputs) 2025-09-07T08:18:30.0202028Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0202417Z outputs = self.model( 2025-09-07T08:18:30.0202784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0203172Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0203559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0203948Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0204298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0204651Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0205052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-09-07T08:18:30.0205488Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:18:30.0205921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0206347Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0206791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:18:30.0207415Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:18:30.0207631Z 2025-09-07T08:18:30.0207748Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0208164Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0208536Z return mod(**inputs) 2025-09-07T08:18:30.0208908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0209322Z outputs = self.model( 2025-09-07T08:18:30.0209693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0210085Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0210463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0210852Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0211198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0211556Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0211963Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-09-07T08:18:30.0212378Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:18:30.0212803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0213212Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0213651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:18:30.0214100Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:18:30.0214261Z 2025-09-07T08:18:30.0214390Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0214597Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0214809Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0215015Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0215221Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0215419Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0215624Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0215865Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0216071Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0216267Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0216477Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0216718Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0217076Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0217397Z return mod(**inputs) 2025-09-07T08:18:30.0217762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0218148Z outputs = self.model( 2025-09-07T08:18:30.0218515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0218900Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0219283Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0219659Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0219997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0220348Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0220728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:18:30.0221165Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:18:30.0221566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0221968Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0222391Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:18:30.0222877Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:18:30.0223054Z 2025-09-07T08:18:30.0223155Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0223505Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0223819Z return mod(**inputs) 2025-09-07T08:18:30.0224179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0224547Z outputs = self.model( 2025-09-07T08:18:30.0224905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0225286Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0225675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0226057Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0226394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0226745Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0227131Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 760, in forward 2025-09-07T08:18:30.0227540Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:18:30.0227944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0228341Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0228776Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:18:30.0229217Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:18:30.0229370Z 2025-09-07T08:18:30.0229456Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0229657Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0229863Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0230066Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0230270Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0230463Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0230663Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0230861Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0231091Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0231443Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0231755Z return mod(**inputs) 2025-09-07T08:18:30.0232118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0232495Z outputs = self.model( 2025-09-07T08:18:30.0232856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0233230Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0233603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0233979Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0234319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0234711Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0235090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-09-07T08:18:30.0235510Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:18:30.0235921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0236343Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0236768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:18:30.0237220Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:18:30.0237415Z 2025-09-07T08:18:30.0237515Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0237854Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0238159Z return mod(**inputs) 2025-09-07T08:18:30.0238499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1359, in forward 2025-09-07T08:18:30.0238882Z outputs = self.model( 2025-09-07T08:18:30.0239239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1207, in forward 2025-09-07T08:18:30.0239618Z decoder_outputs = self.decoder( 2025-09-07T08:18:30.0239997Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1031, in forward 2025-09-07T08:18:30.0240371Z layer_outputs = decoder_layer( 2025-09-07T08:18:30.0240708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:18:30.0241076Z return super().__call__(*args, **kwargs) 2025-09-07T08:18:30.0241500Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 777, in forward 2025-09-07T08:18:30.0241925Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:18:30.0242349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 438, in forward 2025-09-07T08:18:30.0242774Z attn_output, attn_weights = attention_interface( 2025-09-07T08:18:30.0243231Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:18:30.0243695Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:18:30.0243858Z 2025-09-07T08:18:30.0243949Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0244163Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0244379Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0244592Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0244803Z cudagraph partition due to non gpu ops 2025-09-07T08:18:30.0245164Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:18:30.0245554Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:18:30.0245921Z return mod(**inputs) 2025-09-07T08:18:30.0246335Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/plbart/modeling_plbart.py", line 1383, in forward 2025-09-07T08:18:30.0246875Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) 2025-09-07T08:18:30.0247177Z 2025-09-07T08:18:39.9681389Z Compilation time (from dynamo_timed): 35.918873826 2025-09-07T08:18:39.9902922Z pass 2025-09-07T08:18:39.9907933Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:18:39.9913719Z TIMING: _recursive_pre_grad_passes:0.04873 _recursive_joint_graph_passes:0.47274 _recursive_post_grad_passes:0.09483 linear_unary_template_precompiling:0.01742 async_compile.wait:0.80272 code_gen:9.58247 inductor_compile:26.09334 backend_compile:32.98542 gc:0.00258 entire_frame_compile:35.91887 total_wall_time:35.91887 2025-09-07T08:18:39.9915793Z STATS: call_* op count: 519 | FakeTensorMode.__torch_dispatch__:36248 | FakeTensor.__torch_dispatch__:4018 | ProxyTorchDispatchMode.__torch_dispatch__:9868 2025-09-07T08:18:39.9919453Z Dynamo produced 1 graphs covering 519 ops with 0 graph breaks (0 unique) 2025-09-07T08:18:42.7986755Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T08:18:42.7987744Z import pynvml # type: ignore[import] 2025-09-07T08:18:45.4991018Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-09-07T08:18:45.4992214Z from pkg_resources import resource_filename 2025-09-07T08:18:46.1606685Z 2025-09-07T08:18:49.5440693Z loading model: 0it [00:00, ?it/s] 2025-09-07T08:18:49.5444266Z loading model: 0it [00:03, ?it/s] 2025-09-07T08:18:49.5444674Z cpu eval PegasusForCausalLM 2025-09-07T08:18:49.9189758Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:18:50.0799061Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:18:50.2173473Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:19:09.4857256Z Autotune Choices Stats: 2025-09-07T08:19:09.4857814Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_4", "best_time": 0.12085550019946822} 2025-09-07T08:19:09.4869631Z AUTOTUNE linear_unary(128x1024, 4096x1024, 4096) 2025-09-07T08:19:09.4870701Z strides: [1024, 1], [1, 0], [1] 2025-09-07T08:19:09.4871080Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:19:09.4871407Z cpp_CppMicroGemmAMX_4 0.1209 ms 100.0% 2025-09-07T08:19:09.4871664Z _linear_pointwise 0.1274 ms 94.8% 2025-09-07T08:19:09.4872055Z SingleProcess AUTOTUNE benchmarking takes 0.2806 seconds and 1.5352 seconds precompiling for 2 choices 2025-09-07T08:19:16.9041946Z Autotune Choices Stats: 2025-09-07T08:19:16.9042635Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_72", "best_time": 0.9421845002179907} 2025-09-07T08:19:16.9057025Z AUTOTUNE linear_unary(128x1024, 50265x1024) 2025-09-07T08:19:16.9057349Z strides: [1024, 1], [1, 0] 2025-09-07T08:19:16.9059806Z dtypes: torch.bfloat16, torch.bfloat16 2025-09-07T08:19:16.9060097Z cpp_CppMicroGemmAMX_72 0.9422 ms 100.0% 2025-09-07T08:19:16.9060367Z _linear_pointwise 1.2204 ms 77.2% 2025-09-07T08:19:16.9060750Z SingleProcess AUTOTUNE benchmarking takes 0.4604 seconds and 1.4159 seconds precompiling for 2 choices 2025-09-07T08:19:17.5936958Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5937854Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5938178Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5938402Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5938621Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5938847Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5939054Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5939268Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5939479Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5939693Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5939899Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5940110Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5940713Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5940932Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5941137Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5941352Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5941568Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5941780Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5941983Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5942235Z cudagraph partition due to non gpu ops. 
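Editor's note: the AUTOTUNE blocks above record Inductor's max-autotune choice for each linear_unary GEMM — the C++ AMX micro-GEMM template (cpp_CppMicroGemmAMX_*) is benchmarked against the ATen _linear_pointwise fallback and the faster kernel wins (0.1209 ms vs 0.1274 ms for the 128x1024 @ 1024x4096 case). A minimal sketch of compiling a comparable linear-plus-unary pattern with autotuning enabled (assuming a recent PyTorch with Inductor; the relu epilogue is illustrative, and this is not the benchmark's own invocation, which goes through benchmarks/dynamo/huggingface.py):

```python
import torch

class LinearUnary(torch.nn.Module):
    # Linear(1024 -> 4096) plus a unary epilogue, matching the shape of the first
    # AUTOTUNE linear_unary(128x1024, 4096x1024, 4096) entry above.
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(1024, 4096)

    def forward(self, x):
        return torch.nn.functional.relu(self.fc(x))

model = LinearUnary().to(torch.bfloat16).eval()
x = torch.randn(128, 1024, dtype=torch.bfloat16)

# mode="max-autotune" is what makes Inductor benchmark candidate GEMM kernels
# (e.g. a C++ micro-GEMM template vs. the ATen fallback) and keep the winner.
compiled = torch.compile(model, mode="max-autotune")
with torch.no_grad():
    out = compiled(x)
print(out.shape)  # torch.Size([128, 4096])
```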
Found from : 2025-09-07T08:19:17.5942709Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:19:17.5943063Z return mod(**inputs) 2025-09-07T08:19:17.5943491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-09-07T08:19:17.5943938Z outputs = self.model.decoder( 2025-09-07T08:19:17.5944495Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:19:17.5944943Z layer_outputs = decoder_layer( 2025-09-07T08:19:17.5945556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:19:17.5945963Z return super().__call__(*args, **kwargs) 2025-09-07T08:19:17.5946415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:19:17.5946903Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:19:17.5947377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:19:17.5947864Z attn_output, attn_weights = attention_interface( 2025-09-07T08:19:17.5948345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:19:17.5948860Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:19:17.5949064Z 2025-09-07T08:19:17.5949181Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:19:17.5949590Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:19:17.5949966Z return mod(**inputs) 2025-09-07T08:19:17.5950378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-09-07T08:19:17.5950835Z outputs = self.model.decoder( 2025-09-07T08:19:17.5951273Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:19:17.5951729Z layer_outputs = decoder_layer( 2025-09-07T08:19:17.5952116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:19:17.5952524Z return super().__call__(*args, **kwargs) 2025-09-07T08:19:17.5952970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:19:17.5953455Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:19:17.5953920Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:19:17.5954379Z attn_output, attn_weights = attention_interface( 2025-09-07T08:19:17.5954874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:19:17.5955380Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:19:17.5955563Z 2025-09-07T08:19:17.5955666Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5955909Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5956137Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5956440Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5956664Z 
cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5956942Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5957163Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5957388Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5957607Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5957833Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5958090Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:19:17.5958510Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:19:17.5958863Z return mod(**inputs) 2025-09-07T08:19:17.5959278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-09-07T08:19:17.5959717Z outputs = self.model.decoder( 2025-09-07T08:19:17.5960146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:19:17.5960637Z layer_outputs = decoder_layer( 2025-09-07T08:19:17.5961016Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:19:17.5961437Z return super().__call__(*args, **kwargs) 2025-09-07T08:19:17.5961888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:19:17.5962348Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:19:17.5962793Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:19:17.5963245Z attn_output, attn_weights = attention_interface( 2025-09-07T08:19:17.5963732Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:19:17.5964287Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:19:17.5964483Z 2025-09-07T08:19:17.5964605Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:19:17.5964989Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:19:17.5965340Z return mod(**inputs) 2025-09-07T08:19:17.5965744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-09-07T08:19:17.5966179Z outputs = self.model.decoder( 2025-09-07T08:19:17.5966611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:19:17.5967194Z layer_outputs = decoder_layer( 2025-09-07T08:19:17.5967600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:19:17.5968013Z return super().__call__(*args, **kwargs) 2025-09-07T08:19:17.5968473Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:19:17.5968898Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:19:17.5969310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:19:17.5969728Z attn_output, attn_weights = attention_interface( 2025-09-07T08:19:17.5970179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:19:17.5970643Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:19:17.5970811Z 2025-09-07T08:19:17.5970903Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5971117Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5971335Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5971598Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5971811Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5972014Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5972229Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5972452Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5972658Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5972856Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5973095Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:19:17.5973473Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:19:17.5973802Z return mod(**inputs) 2025-09-07T08:19:17.5974178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-09-07T08:19:17.5974587Z outputs = self.model.decoder( 2025-09-07T08:19:17.5974993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:19:17.5975420Z layer_outputs = decoder_layer( 2025-09-07T08:19:17.5975807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:19:17.5976215Z return super().__call__(*args, **kwargs) 2025-09-07T08:19:17.5976643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:19:17.5977089Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:19:17.5977521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:19:17.5977966Z attn_output, attn_weights = attention_interface( 2025-09-07T08:19:17.5978445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:19:17.5978965Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:19:17.5979171Z 2025-09-07T08:19:17.5979285Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:19:17.5979678Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:19:17.5980025Z return mod(**inputs) 2025-09-07T08:19:17.5980426Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1634, in forward 2025-09-07T08:19:17.5980863Z outputs = self.model.decoder( 2025-09-07T08:19:17.5981291Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:19:17.5981715Z layer_outputs = decoder_layer( 2025-09-07T08:19:17.5982091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:19:17.5982485Z return super().__call__(*args, **kwargs) 2025-09-07T08:19:17.5982924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:19:17.5983384Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:19:17.5983837Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:19:17.5984283Z attn_output, attn_weights = attention_interface( 2025-09-07T08:19:17.5984761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:19:17.5985241Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:19:17.5985407Z 2025-09-07T08:19:17.5985498Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5985714Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5985945Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5986208Z cudagraph partition due to non gpu ops 2025-09-07T08:19:17.5986437Z 
2025-09-07T08:19:17.6114253Z cudagraph partition due to non gpu ops
2025-09-07T08:19:17.6114465Z cudagraph partition due to non gpu ops
2025-09-07T08:19:17.6114671Z cudagraph partition due to non gpu ops
2025-09-07T08:19:17.6114884Z cudagraph partition due to non gpu ops
2025-09-07T08:19:17.6115116Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:19:17.6115469Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:19:17.6115782Z     return mod(**inputs)
2025-09-07T08:19:17.6116153Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1656, in forward
2025-09-07T08:19:17.6116665Z     loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-09-07T08:19:17.6116856Z
2025-09-07T08:19:26.3166894Z Compilation time (from dynamo_timed): 34.881454134
2025-09-07T08:19:26.3188104Z pass
2025-09-07T08:19:26.3188690Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:19:26.3191451Z TIMING: _recursive_pre_grad_passes:0.03702 _recursive_joint_graph_passes:0.38896 _recursive_post_grad_passes:0.07195 linear_unary_template_precompiling:2.96377 linear_unary_template_autotuning:0.7363 async_compile.wait:0.86823 code_gen:8.51929 inductor_compile:27.41808 backend_compile:32.63238 gc:0.00058 entire_frame_compile:34.88145 total_wall_time:34.88145
2025-09-07T08:19:26.3192709Z STATS: call_* op count: 371 | FakeTensorMode.__torch_dispatch__:27555 | FakeTensor.__torch_dispatch__:3031 | ProxyTorchDispatchMode.__torch_dispatch__:7479
2025-09-07T08:19:26.3193284Z Dynamo produced 1 graphs covering 371 ops with 0 graph breaks (0 unique)
2025-09-07T08:19:29.3009224Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead.
If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:19:29.3010101Z   import pynvml  # type: ignore[import]
2025-09-07T08:19:32.0106777Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:19:32.0107741Z   from pkg_resources import resource_filename
2025-09-07T08:19:32.6801612Z
2025-09-07T08:19:38.3482144Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:19:38.3482706Z loading model: 0it [00:05, ?it/s]
2025-09-07T08:19:38.3483063Z cpu eval PegasusForConditionalGeneration
2025-09-07T08:19:38.9986610Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:19:39.3468448Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:19:39.6088637Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:20:26.2674853Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2681098Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2683087Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2683368Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2683594Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2683820Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2684045Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2684266Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2684498Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2684731Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2684977Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2685211Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2685447Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2685682Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2685915Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2686159Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2686396Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2686636Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2686872Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2687291Z cudagraph partition due to non gpu ops.
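The "Compilation time", TIMING, STATS, and "Dynamo produced 1 graphs covering 371 ops with 0 graph breaks" lines above summarize a single torch.compile frame: the whole forward pass was captured as one FX graph and compiled in roughly 35 s end to end (about 27 s of it inside Inductor). A rough sketch, outside the benchmark harness, of compiling a seq2seq model on CPU and inspecting graph and graph-break counts; the checkpoint name and the torch._dynamo.explain call are assumptions for illustration, and the exact ExplainOutput fields can differ between PyTorch releases.

```python
# Hedged sketch (not the harness's invocation): compile a seq2seq model on CPU
# with TorchInductor and inspect graph/graph-break counts, analogous to the
# "Dynamo produced 1 graphs ... 0 graph breaks" summary in the log.
import torch
import torch._dynamo
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "google/pegasus-xsum"  # assumed checkpoint, purely illustrative
model = AutoModelForSeq2SeqLM.from_pretrained(name).eval()
tok = AutoTokenizer.from_pretrained(name)
enc = tok("a short example document", return_tensors="pt")
dec_ids = enc["input_ids"][:, :1]

compiled = torch.compile(model, backend="inductor")
with torch.no_grad():
    out = compiled(**enc, decoder_input_ids=dec_ids)

# torch._dynamo.explain reports graph and graph-break counts for a callable;
# attribute names may vary slightly across PyTorch versions.
report = torch._dynamo.explain(model)(**enc, decoder_input_ids=dec_ids)
print(report.graph_count, report.graph_break_count)
```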
Found from :
2025-09-07T08:20:26.2687739Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:20:26.2688185Z     return mod(**inputs)
2025-09-07T08:20:26.2688975Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward
2025-09-07T08:20:26.2689539Z     outputs = self.model(
2025-09-07T08:20:26.2689968Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward
2025-09-07T08:20:26.2690413Z     encoder_outputs = self.encoder(
2025-09-07T08:20:26.2690853Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward
2025-09-07T08:20:26.2691341Z     layer_outputs = encoder_layer(
2025-09-07T08:20:26.2691738Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:20:26.2692140Z     return super().__call__(*args, **kwargs)
2025-09-07T08:20:26.2692604Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward
2025-09-07T08:20:26.2693080Z     hidden_states, attn_weights = self.self_attn(
2025-09-07T08:20:26.2693542Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward
2025-09-07T08:20:26.2694010Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:20:26.2694560Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-09-07T08:20:26.2695095Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:20:26.2695314Z
2025-09-07T08:20:26.2695434Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T08:20:26.2695836Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:20:26.2696209Z     return mod(**inputs)
2025-09-07T08:20:26.2696620Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward
2025-09-07T08:20:26.2697072Z     outputs = self.model(
2025-09-07T08:20:26.2697492Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward
2025-09-07T08:20:26.2697941Z     encoder_outputs = self.encoder(
2025-09-07T08:20:26.2698369Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward
2025-09-07T08:20:26.2698771Z     layer_outputs = encoder_layer(
2025-09-07T08:20:26.2699161Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:20:26.2699606Z     return super().__call__(*args, **kwargs)
2025-09-07T08:20:26.2700018Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward
2025-09-07T08:20:26.2700446Z     hidden_states, attn_weights = self.self_attn(
2025-09-07T08:20:26.2700889Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward
2025-09-07T08:20:26.2701351Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:20:26.2701822Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-09-07T08:20:26.2702296Z     attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:20:26.2702465Z
2025-09-07T08:20:26.2702551Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2702779Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2703006Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2703236Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2703455Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2703681Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2703902Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2704124Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2704382Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2704604Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.2704858Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:20:26.2769660Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2769985Z return mod(**inputs) 2025-09-07T08:20:26.2770369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2770771Z outputs = self.model( 2025-09-07T08:20:26.2771153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2771618Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2772015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2772418Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2772777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2773149Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2773553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2773973Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2774393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2774823Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2775274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.2775760Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.2775954Z 2025-09-07T08:20:26.2776062Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2776429Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2776766Z return mod(**inputs) 2025-09-07T08:20:26.2777146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2777540Z outputs = self.model( 2025-09-07T08:20:26.2777923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2778366Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2778767Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2779161Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2779520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2779904Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2780311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2780732Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2781143Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2781568Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2782013Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.2782452Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.2782623Z 2025-09-07T08:20:26.2782712Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2782912Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2783117Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2783321Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2783523Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2783716Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2783916Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2784116Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2784316Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2784508Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2784738Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2785090Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2785404Z return mod(**inputs) 2025-09-07T08:20:26.2785761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2786144Z outputs = self.model( 2025-09-07T08:20:26.2786517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2786910Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2787293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2787675Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2788035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2788389Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2788775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2789174Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2789563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2789971Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2790400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.2790859Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.2791043Z 2025-09-07T08:20:26.2791159Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2791521Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2791902Z return mod(**inputs) 2025-09-07T08:20:26.2792295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2792705Z outputs = self.model( 2025-09-07T08:20:26.2793089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2793491Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2793902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2794300Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2794638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2794997Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2795388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2795788Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2796210Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2796624Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2797057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.2797511Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.2797693Z 2025-09-07T08:20:26.2797773Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2797985Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2798189Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2798402Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2798616Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2798829Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2799038Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2799254Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2799471Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2799718Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2800094Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2800423Z return mod(**inputs) 2025-09-07T08:20:26.2800799Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2801192Z outputs = self.model( 2025-09-07T08:20:26.2801587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2802015Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2802441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2802865Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2803252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2803646Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2804076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2804521Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2804969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2805424Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2805895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.2807632Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.2807845Z 2025-09-07T08:20:26.2807973Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2808373Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2808726Z return mod(**inputs) 2025-09-07T08:20:26.2809102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2809486Z outputs = self.model( 2025-09-07T08:20:26.2809849Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2810254Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2810651Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2811047Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2811404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2811797Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2812182Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2812582Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2812985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2813402Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2813843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.2814297Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.2814462Z 2025-09-07T08:20:26.2814545Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2814765Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2814980Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2815194Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2815405Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2815608Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2815818Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2816031Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2816242Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2816447Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2816685Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2817051Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2817378Z return mod(**inputs) 2025-09-07T08:20:26.2817753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2818146Z outputs = self.model( 2025-09-07T08:20:26.2818522Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2818919Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2819305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2819710Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2820071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2820441Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2820851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2821329Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2821751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2822163Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2822598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.2823065Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.2823262Z 2025-09-07T08:20:26.2823369Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2823733Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2824066Z return mod(**inputs) 2025-09-07T08:20:26.2824443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2824842Z outputs = self.model( 2025-09-07T08:20:26.2825205Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2825603Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2826005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2826399Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2826744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2827108Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2827507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2827914Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2828324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2828730Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2829167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.2829619Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.2829777Z 2025-09-07T08:20:26.2829867Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2830082Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2830291Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2830498Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2830704Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2830910Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2831108Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2831313Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2831525Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2831760Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2832111Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2832437Z return mod(**inputs) 2025-09-07T08:20:26.2832807Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2833196Z outputs = self.model( 2025-09-07T08:20:26.2833563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2833955Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2834343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2834731Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2835095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2835460Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2835858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2836261Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2836669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2837096Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2837530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.2837999Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.2838184Z 2025-09-07T08:20:26.2838289Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2838653Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2838979Z return mod(**inputs) 2025-09-07T08:20:26.2839369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2839771Z outputs = self.model( 2025-09-07T08:20:26.2840153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2840559Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2840950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2841354Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2841713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2842087Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2842497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2842911Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2843333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2843763Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2844212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.2844674Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.2844838Z 2025-09-07T08:20:26.2844921Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2845373Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2845607Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2845840Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2846056Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2846282Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2846506Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2846735Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2846997Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2847235Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2847489Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2847885Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2848222Z return mod(**inputs) 2025-09-07T08:20:26.2848598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2848988Z outputs = self.model( 2025-09-07T08:20:26.2849364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2849855Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2850257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2850668Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2851036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2851438Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2851853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2852253Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2852662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2853076Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2853516Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.2854003Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.2854190Z 2025-09-07T08:20:26.2854295Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2854652Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2854976Z return mod(**inputs) 2025-09-07T08:20:26.2855347Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2855730Z outputs = self.model( 2025-09-07T08:20:26.2856100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2856496Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2856895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2857300Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2857641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2858002Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2858398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2858805Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2859208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2859628Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2860078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.2860542Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.2860705Z 2025-09-07T08:20:26.2860796Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2861009Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2861226Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2861455Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2861665Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2861867Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2862085Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2862289Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2862493Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2862719Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2863076Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2863435Z return mod(**inputs) 2025-09-07T08:20:26.2863809Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2864200Z outputs = self.model( 2025-09-07T08:20:26.2864564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2864955Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2865358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2865756Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2866099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2866462Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2866859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2867281Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2867718Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2868137Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2868588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.2869104Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.2869282Z 2025-09-07T08:20:26.2869395Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.2869759Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.2870082Z return mod(**inputs) 2025-09-07T08:20:26.2870482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.2870910Z outputs = self.model( 2025-09-07T08:20:26.2871310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1279, in forward 2025-09-07T08:20:26.2871738Z encoder_outputs = self.encoder( 2025-09-07T08:20:26.2872153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 841, in forward 2025-09-07T08:20:26.2872581Z layer_outputs = encoder_layer( 2025-09-07T08:20:26.2872961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.2873351Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.2873773Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 312, in forward 2025-09-07T08:20:26.2874218Z hidden_states, attn_weights = self.self_attn( 2025-09-07T08:20:26.2874666Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.2875117Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.2875594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.2876076Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.2876258Z 2025-09-07T08:20:26.2876346Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2876581Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2876808Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2877024Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2877247Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2877471Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2877696Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2877954Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2878170Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2878392Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.2878647Z cudagraph partition due to non gpu ops. 
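[Editor's note: every trace in the block above bottoms out at the same two lines of transformers/integrations/sdpa_attention.py, the scaled_dot_product_attention call at line 81 and the transpose(1, 2).contiguous() at line 91. The snippet below is a rough, illustrative sketch only of the kind of minimal attention module, compiled with torch.compile's CUDA-graph ("reduce-overhead") mode, around which the inductor cudagraph partitioner can emit "cudagraph partition due to non gpu ops" messages when it meets ops it classifies as non-GPU. The module name, shapes, dtype, and compile mode are assumptions for illustration and are not taken from this job.]

# Minimal sketch (not part of this CI job); names and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F


class TinySelfAttention(torch.nn.Module):
    def forward(self, q, k, v):
        # Corresponds to sdpa_attention.py line 81 in the traces: the fused SDPA call.
        out = F.scaled_dot_product_attention(q, k, v)
        # Corresponds to sdpa_attention.py line 91: layout change after attention.
        return out.transpose(1, 2).contiguous()


if torch.cuda.is_available():
    attn = TinySelfAttention().cuda()
    # "reduce-overhead" enables CUDA graphs in torch.compile; when the cudagraph
    # partitioner finds ops it treats as non-GPU around a region, it splits the
    # graph and can log messages like the ones captured in this job.
    compiled = torch.compile(attn, mode="reduce-overhead")
    q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
    for _ in range(3):  # a few warm-up iterations before graphs are recorded
        compiled(q, k, v)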
Found from :
2025-09-07T08:20:26.2879036Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:20:26.2879380Z     return mod(**inputs)
2025-09-07T08:20:26.2879805Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward
2025-09-07T08:20:26.2880226Z     outputs = self.model(
2025-09-07T08:20:26.2913628Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward
2025-09-07T08:20:26.2914209Z     decoder_outputs = self.decoder(
2025-09-07T08:20:26.2914690Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward
2025-09-07T08:20:26.2915159Z     layer_outputs = decoder_layer(
2025-09-07T08:20:26.2915565Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:20:26.2916036Z     return super().__call__(*args, **kwargs)
2025-09-07T08:20:26.2916450Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward
2025-09-07T08:20:26.2916873Z     hidden_states, self_attn_weights = self.self_attn(
2025-09-07T08:20:26.2917290Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward
2025-09-07T08:20:26.2917702Z     attn_output, attn_weights = attention_interface(
2025-09-07T08:20:26.2918142Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-09-07T08:20:26.2918624Z     attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:20:26.2918814Z
2025-09-07T08:20:26.2918934Z cudagraph partition due to non gpu ops.
[The same decoder self-attention trace is also logged ending at sdpa_attention.py line 91 (attn_output = attn_output.transpose(1, 2).contiguous()), and a matching pair of traces is logged for the decoder cross-attention path, which enters the attention via "hidden_states, cross_attn_weights = self.encoder_attn(" at modeling_pegasus.py line 424 instead of "self.self_attn(" at line 407. This group of four traces (self-attention lines 81 and 91, cross-attention lines 81 and 91) is logged three times, followed by one final self-attention trace ending at line 81, between 2025-09-07T08:20:26.2919295Z and 2025-09-07T08:20:26.3011856Z. As in the encoder block above, each line-81 trace is followed by a single "cudagraph partition due to non gpu ops." message, and each line-91 trace by a run of nine to eleven such messages.]
Found from : 2025-09-07T08:20:26.3012227Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3012562Z return mod(**inputs) 2025-09-07T08:20:26.3012935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3013339Z outputs = self.model( 2025-09-07T08:20:26.3013719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3014120Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3014497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3014889Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3015246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3015632Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3016045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3016463Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3016887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3017280Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3017701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3018137Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3018292Z 2025-09-07T08:20:26.3018383Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3018593Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3018792Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3018994Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3019195Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3019396Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3019588Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3019819Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3020169Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3020488Z return mod(**inputs) 2025-09-07T08:20:26.3020846Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3021242Z outputs = self.model( 2025-09-07T08:20:26.3021623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3022029Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3022447Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3022865Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3023238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3023622Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3024012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-09-07T08:20:26.3024432Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:20:26.3024836Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3025253Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3025698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.3026160Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.3026334Z 2025-09-07T08:20:26.3026446Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3026787Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3027122Z return mod(**inputs) 2025-09-07T08:20:26.3027498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3027888Z outputs = self.model( 2025-09-07T08:20:26.3028258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3028664Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3029057Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3029448Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3029820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3030189Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3030600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-09-07T08:20:26.3031040Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:20:26.3031479Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3031909Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3032356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3032822Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3032994Z 2025-09-07T08:20:26.3033078Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3033299Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3033508Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3033722Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3033934Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3034149Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3034353Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3034565Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3034774Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3034984Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3035221Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3035595Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3035933Z return mod(**inputs) 2025-09-07T08:20:26.3036320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3036722Z outputs = self.model( 2025-09-07T08:20:26.3037097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3037506Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3037907Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3038314Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3038668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3039038Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3039478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3039904Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3040326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3040750Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3041200Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.3041695Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.3041891Z 2025-09-07T08:20:26.3042012Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3042407Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3042755Z return mod(**inputs) 2025-09-07T08:20:26.3043169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3043592Z outputs = self.model( 2025-09-07T08:20:26.3044018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3044448Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3044875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3045518Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3045912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3046308Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3046751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3047335Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3047798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3048265Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3048724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3049168Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3049336Z 2025-09-07T08:20:26.3049417Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3049633Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3049847Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3050052Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3050266Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3050487Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3050693Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3050894Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3051137Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3051527Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3051881Z return mod(**inputs) 2025-09-07T08:20:26.3052264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3052659Z outputs = self.model( 2025-09-07T08:20:26.3053049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3053441Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3053825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3054283Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3054628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3054990Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3055388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-09-07T08:20:26.3055810Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:20:26.3056248Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3056661Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3057100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.3057570Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.3057753Z 2025-09-07T08:20:26.3057868Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3058218Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3058540Z return mod(**inputs) 2025-09-07T08:20:26.3058938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3059336Z outputs = self.model( 2025-09-07T08:20:26.3059715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3060110Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3060508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3060906Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3061262Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3061628Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3062064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-09-07T08:20:26.3062508Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:20:26.3062951Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3063385Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3063833Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3064299Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3064474Z 2025-09-07T08:20:26.3064559Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3064786Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3065000Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3065208Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3065419Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3065634Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3065844Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3066047Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3066260Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3066503Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3066868Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3067191Z return mod(**inputs) 2025-09-07T08:20:26.3067573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3067974Z outputs = self.model( 2025-09-07T08:20:26.3068380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3068788Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3069187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3069566Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3069903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3070269Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3070650Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3071056Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3071463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3071875Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3072307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.3072789Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.3072982Z 2025-09-07T08:20:26.3073091Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3073458Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3073789Z return mod(**inputs) 2025-09-07T08:20:26.3074170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3074564Z outputs = self.model( 2025-09-07T08:20:26.3074944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3075352Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3075749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3076146Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3076508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3076869Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3077268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3077682Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3078089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3078500Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3078940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3079385Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3079543Z 2025-09-07T08:20:26.3079632Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3079841Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3080049Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3080259Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3080472Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3080679Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3080890Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3081102Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3081342Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3081713Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3082086Z return mod(**inputs) 2025-09-07T08:20:26.3082470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3082869Z outputs = self.model( 2025-09-07T08:20:26.3083254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3083650Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3084064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3084464Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3084822Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3085186Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3085593Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-09-07T08:20:26.3086047Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:20:26.3086530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3086632Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3087033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.3087228Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.3087234Z 2025-09-07T08:20:26.3087351Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3087558Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3087628Z return mod(**inputs) 2025-09-07T08:20:26.3087900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3087976Z outputs = self.model( 2025-09-07T08:20:26.3088247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3088327Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3088596Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3088679Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3088895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3088984Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3089241Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-09-07T08:20:26.3089348Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:20:26.3089616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3089713Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3090003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3090109Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3090112Z 2025-09-07T08:20:26.3090200Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3090278Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3090354Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3090438Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3090513Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3090594Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3090669Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3090790Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3090873Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3090976Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3091181Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3091249Z return mod(**inputs) 2025-09-07T08:20:26.3091504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3091596Z outputs = self.model( 2025-09-07T08:20:26.3091858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3091941Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3092199Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3092276Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3092499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3092579Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3092858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3092960Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3093221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3093324Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3093609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.3093750Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.3093756Z 2025-09-07T08:20:26.3093861Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3094065Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3094132Z return mod(**inputs) 2025-09-07T08:20:26.3094393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3094470Z outputs = self.model( 2025-09-07T08:20:26.3094729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3094812Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3095070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3095145Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3095368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3095452Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3095716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3095815Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3096078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3096174Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3096458Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3096573Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3096577Z 2025-09-07T08:20:26.3096655Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3096757Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3096851Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3096927Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3097012Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3097087Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3097172Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3097247Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3097348Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3097566Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3097635Z return mod(**inputs) 2025-09-07T08:20:26.3097900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3097969Z outputs = self.model( 2025-09-07T08:20:26.3098227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3098313Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3098570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3098671Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3098887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3098967Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3099229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-09-07T08:20:26.3099337Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:20:26.3099601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3099693Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3099986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.3100112Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.3100117Z 2025-09-07T08:20:26.3100220Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3100422Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3100489Z return mod(**inputs) 2025-09-07T08:20:26.3100753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3100821Z outputs = self.model( 2025-09-07T08:20:26.3101075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3101157Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3101427Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3101503Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3101715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3101797Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3102049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-09-07T08:20:26.3102153Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:20:26.3102413Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3102505Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3102787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3102923Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3102926Z 2025-09-07T08:20:26.3103004Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3103088Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3103161Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3103242Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3103316Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3103405Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3103486Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3103562Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3103641Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3103714Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3103824Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3104018Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3104084Z return mod(**inputs) 2025-09-07T08:20:26.3104338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3104419Z outputs = self.model( 2025-09-07T08:20:26.3104674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3104755Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3105006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3105085Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3105297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3105381Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3105631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3105728Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3105986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3106078Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3106361Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.3106488Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.3106491Z 2025-09-07T08:20:26.3106591Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3106790Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3106855Z return mod(**inputs) 2025-09-07T08:20:26.3107117Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3107183Z outputs = self.model( 2025-09-07T08:20:26.3107443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3107516Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3107768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3107856Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3108074Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3108158Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3108400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3108524Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3108775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3108866Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3109138Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3109238Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3109256Z 2025-09-07T08:20:26.3109337Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3109412Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3109485Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3109566Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3109640Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3109720Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3109796Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3109898Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3110095Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3110173Z return mod(**inputs) 2025-09-07T08:20:26.3110434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3110500Z outputs = self.model( 2025-09-07T08:20:26.3110755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3110837Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3111088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3111164Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3111378Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3111457Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3111716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-09-07T08:20:26.3111821Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:20:26.3112084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3112180Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3112468Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.3112595Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.3112598Z 2025-09-07T08:20:26.3112699Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3112902Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3112969Z return mod(**inputs) 2025-09-07T08:20:26.3113235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3113302Z outputs = self.model( 2025-09-07T08:20:26.3113561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3113644Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3113957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3114033Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3114245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3114354Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3114619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-09-07T08:20:26.3114722Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:20:26.3114986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3115077Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3115385Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3115488Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3115491Z 2025-09-07T08:20:26.3115571Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3115655Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3115732Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3115819Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3115895Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3115968Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3116048Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3116137Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3116218Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3116292Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3116391Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3116590Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3116655Z return mod(**inputs) 2025-09-07T08:20:26.3116911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3116978Z outputs = self.model( 2025-09-07T08:20:26.3117227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3117311Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3117561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3117639Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3117847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3117928Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3118192Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3118285Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3118533Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3118627Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3118910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.3119034Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.3119037Z 2025-09-07T08:20:26.3119135Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3119330Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3119393Z return mod(**inputs) 2025-09-07T08:20:26.3119646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3119709Z outputs = self.model( 2025-09-07T08:20:26.3119961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3120077Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3120333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3120411Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3120629Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3120715Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3120996Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3121095Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3121365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3121457Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3121736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3121843Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3121846Z 2025-09-07T08:20:26.3121924Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3122020Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3122096Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3122179Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3122252Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3122326Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3122407Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3122480Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3122587Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3122817Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3122883Z return mod(**inputs) 2025-09-07T08:20:26.3123157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3123222Z outputs = self.model( 2025-09-07T08:20:26.3123489Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3123565Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3123827Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3123908Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3124125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3124211Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3124469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-09-07T08:20:26.3124585Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:20:26.3124843Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3124940Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3125227Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.3125355Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.3125359Z 2025-09-07T08:20:26.3125468Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3125666Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3125740Z return mod(**inputs) 2025-09-07T08:20:26.3126007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3126108Z outputs = self.model( 2025-09-07T08:20:26.3126383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3126460Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3126731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3126804Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3127257Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3127409Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3127853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward 2025-09-07T08:20:26.3128046Z hidden_states, cross_attn_weights = self.encoder_attn( 2025-09-07T08:20:26.3128535Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3128666Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3129178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3129340Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3129346Z 2025-09-07T08:20:26.3129471Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3129571Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3129673Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3129780Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3129883Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3129991Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3130093Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3130211Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3130323Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3130467Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3130781Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3130873Z return mod(**inputs) 2025-09-07T08:20:26.3131449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3131531Z outputs = self.model( 2025-09-07T08:20:26.3131785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3131866Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3132164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3132246Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3132466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3132548Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3132815Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3132915Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3133179Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3133274Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3133561Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward 2025-09-07T08:20:26.3133693Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:20:26.3133745Z 2025-09-07T08:20:26.3133874Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:20:26.3134072Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:20:26.3134138Z return mod(**inputs) 2025-09-07T08:20:26.3134399Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward 2025-09-07T08:20:26.3134467Z outputs = self.model( 2025-09-07T08:20:26.3134740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward 2025-09-07T08:20:26.3134823Z decoder_outputs = self.decoder( 2025-09-07T08:20:26.3135076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward 2025-09-07T08:20:26.3135153Z layer_outputs = decoder_layer( 2025-09-07T08:20:26.3135364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:20:26.3135444Z return super().__call__(*args, **kwargs) 2025-09-07T08:20:26.3135703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 407, in forward 2025-09-07T08:20:26.3135815Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:20:26.3136071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward 2025-09-07T08:20:26.3136166Z attn_output, attn_weights = attention_interface( 2025-09-07T08:20:26.3136452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward 2025-09-07T08:20:26.3136558Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:20:26.3136562Z 2025-09-07T08:20:26.3136641Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3136725Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3136805Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3136890Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3136967Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3137043Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3137129Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3137205Z cudagraph partition due to non gpu ops 2025-09-07T08:20:26.3137313Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-07T08:20:26.3137509Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:20:26.3137575Z return mod(**inputs)
2025-09-07T08:20:26.3137839Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward
2025-09-07T08:20:26.3137908Z outputs = self.model(
2025-09-07T08:20:26.3138175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward
2025-09-07T08:20:26.3138253Z decoder_outputs = self.decoder(
2025-09-07T08:20:26.3138509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward
2025-09-07T08:20:26.3138588Z layer_outputs = decoder_layer(
2025-09-07T08:20:26.3138806Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:20:26.3138891Z return super().__call__(*args, **kwargs)
2025-09-07T08:20:26.3139146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward
2025-09-07T08:20:26.3139253Z hidden_states, cross_attn_weights = self.encoder_attn(
2025-09-07T08:20:26.3139515Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward
2025-09-07T08:20:26.3139611Z attn_output, attn_weights = attention_interface(
2025-09-07T08:20:26.3139933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 81, in sdpa_attention_forward
2025-09-07T08:20:26.3140062Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:20:26.3140065Z
2025-09-07T08:20:26.3140175Z cudagraph partition due to non gpu ops.
Found from :
2025-09-07T08:20:26.3140373Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:20:26.3140439Z return mod(**inputs)
2025-09-07T08:20:26.3140719Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1471, in forward
2025-09-07T08:20:26.3140788Z outputs = self.model(
2025-09-07T08:20:26.3141051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1297, in forward
2025-09-07T08:20:26.3141124Z decoder_outputs = self.decoder(
2025-09-07T08:20:26.3141382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1115, in forward
2025-09-07T08:20:26.3141462Z layer_outputs = decoder_layer(
2025-09-07T08:20:26.3141692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:20:26.3141780Z return super().__call__(*args, **kwargs)
2025-09-07T08:20:26.3142036Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 424, in forward
2025-09-07T08:20:26.3142149Z hidden_states, cross_attn_weights = self.encoder_attn(
2025-09-07T08:20:26.3142404Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 253, in forward
2025-09-07T08:20:26.3142498Z attn_output, attn_weights = attention_interface(
2025-09-07T08:20:26.3142791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/integrations/sdpa_attention.py", line 91, in sdpa_attention_forward
2025-09-07T08:20:26.3142898Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:20:26.3142902Z
2025-09-07T08:20:26.3142985Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.3143062Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.3143138Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.3143221Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.3143297Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.3143377Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.3143452Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.3143527Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.3143608Z cudagraph partition due to non gpu ops
2025-09-07T08:20:26.3143710Z cudagraph partition due to non gpu ops.
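The four traces above (self-attention and cross-attention, each flagged at sdpa_attention.py line 81 and line 91) all name the same op pattern: a call to torch.nn.functional.scaled_dot_product_attention followed by transpose(1, 2).contiguous(). As a minimal, hypothetical sketch of that pattern under torch.compile (shapes and dtype below are placeholders, not values from this log), which on a CPU-only run like this one appears to be what the cudagraph partitioner reports as non-GPU ops:

# Minimal sketch (assumed shapes/dtype) of the attention pattern named in the traces above.
import torch
import torch.nn.functional as F

batch, heads, seq, head_dim = 2, 12, 128, 64  # hypothetical sizes
q = torch.randn(batch, heads, seq, head_dim, dtype=torch.bfloat16)
k = torch.randn(batch, heads, seq, head_dim, dtype=torch.bfloat16)
v = torch.randn(batch, heads, seq, head_dim, dtype=torch.bfloat16)

def attn(q, k, v):
    out = F.scaled_dot_product_attention(q, k, v)   # frame at sdpa_attention.py line 81
    return out.transpose(1, 2).contiguous()         # frame at sdpa_attention.py line 91

compiled = torch.compile(attn)
print(compiled(q, k, v).shape)  # torch.Size([2, 128, 12, 64])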
Found from :
2025-09-07T08:20:26.3169347Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:20:26.3169411Z return mod(**inputs)
2025-09-07T08:20:26.3169701Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 1494, in forward
2025-09-07T08:20:26.3169943Z masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-09-07T08:20:26.3169947Z
2025-09-07T08:20:46.6060759Z Compilation time (from dynamo_timed): 65.692156716
2025-09-07T08:20:46.6068596Z pass
2025-09-07T08:20:46.6070108Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:20:46.6074313Z TIMING: _recursive_pre_grad_passes:0.08972 _recursive_joint_graph_passes:0.84131 _recursive_post_grad_passes:0.1733 linear_unary_template_precompiling:0.03462 async_compile.wait:0.8239 code_gen:19.906 inductor_compile:46.17321 backend_compile:59.78679 gc:0.00023 entire_frame_compile:65.69216 total_wall_time:65.69216
2025-09-07T08:20:46.6075766Z STATS: call_* op count: 967 | FakeTensorMode.__torch_dispatch__:69861 | FakeTensor.__torch_dispatch__:7456 | ProxyTorchDispatchMode.__torch_dispatch__:19145
2025-09-07T08:20:46.6076445Z Dynamo produced 1 graphs covering 967 ops with 0 graph breaks (0 unique)
2025-09-07T08:20:50.2234378Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:20:50.2235321Z import pynvml # type: ignore[import]
2025-09-07T08:20:52.9173075Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:20:52.9173990Z from pkg_resources import resource_filename
2025-09-07T08:20:53.6208122Z
2025-09-07T08:20:53.6321519Z loading model: 0it [00:00, ?it/s]If you want to use `RobertaLMHeadModel` as a standalone, add `is_decoder=True.`
2025-09-07T08:20:53.6322216Z WARNING:transformers.models.roberta.modeling_roberta:If you want to use `RobertaLMHeadModel` as a standalone, add `is_decoder=True.`
2025-09-07T08:20:54.9671659Z We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
2025-09-07T08:20:54.9672672Z You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded.
2025-09-07T08:20:54.9673641Z WARNING:transformers.modeling_utils:We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked.
2025-09-07T08:20:54.9674601Z You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded.
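The "Compilation time (from dynamo_timed)" and TIMING lines above break the 65.7 s of compile work down by phase (Inductor compile, code generation, backend compile, and so on). A rough, hand-rolled sketch of how one could observe the same end-to-end overhead without dynamo_timed is to time the cold (compiling) call against a warm call; the model and sizes here are placeholders:

# Rough sketch (not dynamo_timed itself): compare the first, compiling call with a warm call.
import time
import torch

model = torch.nn.Sequential(torch.nn.Linear(768, 768), torch.nn.GELU())  # stand-in model
x = torch.randn(8, 768)
compiled = torch.compile(model)

t0 = time.perf_counter()
compiled(x)                      # cold call: Dynamo tracing + Inductor compilation happen here
t1 = time.perf_counter()
compiled(x)                      # warm call: reuses the compiled artifact
t2 = time.perf_counter()
print(f"cold (compile + run): {t1 - t0:.3f}s, warm: {t2 - t1:.3f}s")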
2025-09-07T08:20:55.1109487Z
2025-09-07T08:20:55.1110210Z loading model: 0it [00:01, ?it/s]
2025-09-07T08:20:55.1110698Z cpu eval RobertaForCausalLM
2025-09-07T08:20:55.6526801Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:20:55.8165930Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:20:55.9897628Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:21:21.1760801Z Autotune Choices Stats:
2025-09-07T08:21:21.1765438Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_linear_pointwise", "best_time": 6.641227499812885}
2025-09-07T08:21:21.1766162Z AUTOTUNE linear_unary(512x768, 50265x768, 50265)
2025-09-07T08:21:21.1766511Z strides: [768, 1], [1, 0], [1]
2025-09-07T08:21:21.1767115Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:21:21.1767511Z _linear_pointwise 6.6412 ms 100.0%
2025-09-07T08:21:21.1767756Z cpp_CppMicroGemmAMX_73 8.6842 ms 76.5%
2025-09-07T08:21:21.1768152Z SingleProcess AUTOTUNE benchmarking takes 0.8688 seconds and 1.3710 seconds precompiling for 2 choices
2025-09-07T08:21:21.6049780Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:21:21.6050324Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:21:21.6050675Z return mod(**inputs)
2025-09-07T08:21:21.6051407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward
2025-09-07T08:21:21.6051806Z outputs = self.roberta(
2025-09-07T08:21:21.6052188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward
2025-09-07T08:21:21.6052590Z embedding_output = self.embeddings(
2025-09-07T08:21:21.6053042Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward
2025-09-07T08:21:21.6053623Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length)
2025-09-07T08:21:21.6054331Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1576, in create_position_ids_from_input_ids
2025-09-07T08:21:21.6054857Z mask = input_ids.ne(padding_idx).int()
2025-09-07T08:21:21.6055013Z
2025-09-07T08:21:21.6055099Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6055333Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6055539Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6055736Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6055942Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6056151Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6056358Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6056558Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6056760Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6056963Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6057169Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6057367Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6057606Z cudagraph partition due to non gpu ops.
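The AUTOTUNE block above shows Inductor benchmarking two candidate kernels for a bfloat16 linear (a 512x768 activation against a 50265x768 weight) and selecting _linear_pointwise over the AMX micro-GEMM template. As a hypothetical sketch of how that kind of autotuning is requested from user code (the layer, sizes, and dtype here mirror the log only loosely and are otherwise placeholders):

# Hypothetical sketch: compile a bf16 linear head with Inductor autotuning enabled,
# roughly in the spirit of the AUTOTUNE linear_unary block above.
import torch

lm_head = torch.nn.Linear(768, 50265, bias=True).to(torch.bfloat16).eval()
hidden = torch.randn(512, 768, dtype=torch.bfloat16)

compiled_head = torch.compile(lm_head, mode="max-autotune")
with torch.no_grad():
    logits = compiled_head(hidden)   # first call triggers kernel benchmarking
print(logits.shape)  # torch.Size([512, 50265])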
Found from :
2025-09-07T08:21:21.6057987Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:21:21.6058314Z return mod(**inputs)
2025-09-07T08:21:21.6058675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward
2025-09-07T08:21:21.6059056Z outputs = self.roberta(
2025-09-07T08:21:21.6059424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward
2025-09-07T08:21:21.6059822Z embedding_output = self.embeddings(
2025-09-07T08:21:21.6060207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward
2025-09-07T08:21:21.6060707Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length)
2025-09-07T08:21:21.6061274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1577, in create_position_ids_from_input_ids
2025-09-07T08:21:21.6061827Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
2025-09-07T08:21:21.6062060Z
2025-09-07T08:21:21.6062174Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:21:21.6062527Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:21:21.6062848Z return mod(**inputs)
2025-09-07T08:21:21.6063217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward
2025-09-07T08:21:21.6063716Z outputs = self.roberta(
2025-09-07T08:21:21.6064127Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward
2025-09-07T08:21:21.6064542Z embedding_output = self.embeddings(
2025-09-07T08:21:21.6064943Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward
2025-09-07T08:21:21.6065525Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length)
2025-09-07T08:21:21.6066146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1577, in create_position_ids_from_input_ids
2025-09-07T08:21:21.6066706Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
2025-09-07T08:21:21.6066942Z
2025-09-07T08:21:21.6067034Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6067246Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6067466Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6067670Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6067888Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6068085Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6068287Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6068521Z cudagraph partition due to non gpu ops.
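The RoBERTa traces above and in the preceding block end inside create_position_ids_from_input_ids, and the log quotes its two lines directly (the ne() mask on line 1576 and the masked cumsum on line 1577). A standalone sketch of that computation is below; the inputs and padding_idx are illustrative, and the final return line is reconstructed from memory of the transformers source, so treat it as an assumption:

# Standalone sketch of the position-id computation quoted in the traces above.
import torch

def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    mask = input_ids.ne(padding_idx).int()                        # quoted at line 1576
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask  # quoted at line 1577
    return incremental_indices.long() + padding_idx               # assumed return, not shown in the log

input_ids = torch.tensor([[0, 31414, 232, 2, 1, 1]])  # 1 is the assumed padding_idx
print(create_position_ids_from_input_ids(input_ids, padding_idx=1))
# tensor([[2, 3, 4, 5, 1, 1]])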
Found from :
2025-09-07T08:21:21.6068880Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:21:21.6069199Z return mod(**inputs)
2025-09-07T08:21:21.6069573Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward
2025-09-07T08:21:21.6069963Z outputs = self.roberta(
2025-09-07T08:21:21.6070338Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward
2025-09-07T08:21:21.6070743Z encoder_outputs = self.encoder(
2025-09-07T08:21:21.6071140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward
2025-09-07T08:21:21.6071543Z layer_outputs = layer_module(
2025-09-07T08:21:21.6071909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:21:21.6072290Z return super().__call__(*args, **kwargs)
2025-09-07T08:21:21.6072695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward
2025-09-07T08:21:21.6073112Z self_attention_outputs = self.attention(
2025-09-07T08:21:21.6073499Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:21:21.6073877Z return func(*args, **kwargs)
2025-09-07T08:21:21.6074263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward
2025-09-07T08:21:21.6074641Z self_outputs = self.self(
2025-09-07T08:21:21.6075005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
2025-09-07T08:21:21.6075380Z return func(*args, **kwargs)
2025-09-07T08:21:21.6075765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward
2025-09-07T08:21:21.6076220Z attn_output = torch.nn.functional.scaled_dot_product_attention(
2025-09-07T08:21:21.6076404Z
2025-09-07T08:21:21.6076485Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6076697Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6076908Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6077115Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6077381Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6077590Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6077799Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6077997Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6078205Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6078412Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6078619Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6078886Z cudagraph partition due to non gpu ops
2025-09-07T08:21:21.6079140Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:21:21.6185625Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:21.6185941Z return mod(**inputs) 2025-09-07T08:21:21.6186306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 999, in forward 2025-09-07T08:21:21.6186686Z outputs = self.roberta( 2025-09-07T08:21:21.6187045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:21.6187428Z encoder_outputs = self.encoder( 2025-09-07T08:21:21.6187804Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:21.6188179Z layer_outputs = layer_module( 2025-09-07T08:21:21.6188512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:21.6188899Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:21.6189288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:21.6189694Z self_attention_outputs = self.attention( 2025-09-07T08:21:21.6190075Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:21.6190440Z return func(*args, **kwargs) 2025-09-07T08:21:21.6190842Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:21.6191229Z self_outputs = self.self( 2025-09-07T08:21:21.6191592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:21.6191962Z return func(*args, **kwargs) 2025-09-07T08:21:21.6192337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:21.6192783Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:21.6192982Z 2025-09-07T08:21:21.6193061Z cudagraph partition due to non gpu ops 2025-09-07T08:21:21.6193285Z cudagraph partition due to non gpu ops 2025-09-07T08:21:21.6193486Z cudagraph partition due to non gpu ops 2025-09-07T08:21:21.6193689Z cudagraph partition due to non gpu ops 2025-09-07T08:21:21.6193891Z cudagraph partition due to non gpu ops 2025-09-07T08:21:21.6194095Z cudagraph partition due to non gpu ops 2025-09-07T08:21:21.6194316Z cudagraph partition due to non gpu ops 2025-09-07T08:21:21.6194517Z cudagraph partition due to non gpu ops 2025-09-07T08:21:21.6194726Z cudagraph partition due to non gpu ops 2025-09-07T08:21:21.6194956Z cudagraph partition due to non gpu ops. 
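The block of messages above repeats the same Inductor note, "cudagraph partition due to non gpu ops", each time pointing at the same RoBERTa self-attention call site (modeling_roberta.py, line 388, torch.nn.functional.scaled_dot_product_attention). On a CPU-only shard like this one the note is expected: cudagraph partitioning only covers GPU ops, so CPU ops force a partition at every such site. A minimal, hypothetical sketch of the call the traces point at, with illustrative shapes that are not taken from the benchmark:

import torch
import torch.nn.functional as F

# Stand-in tensors; batch/heads/seq/head_dim are illustrative assumptions.
q = torch.randn(1, 8, 128, 64)
k = torch.randn(1, 8, 128, 64)
v = torch.randn(1, 8, 128, 64)

# The call named in the tracebacks: fused scaled-dot-product attention.
# attn_mask=None and is_causal=False mirror a plain encoder self-attention.
attn_output = F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
print(attn_output.shape)  # torch.Size([1, 8, 128, 64])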
Found from : 2025-09-07T08:21:21.6195307Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:21.6195622Z return mod(**inputs) 2025-09-07T08:21:21.6195987Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1022, in forward 2025-09-07T08:21:21.6196375Z lm_loss = self.loss_function( 2025-09-07T08:21:21.6196740Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss 2025-09-07T08:21:21.6197188Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs) 2025-09-07T08:21:21.6197656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy 2025-09-07T08:21:21.6198141Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction) 2025-09-07T08:21:21.6198388Z 2025-09-07T08:21:25.5412395Z Compilation time (from dynamo_timed): 28.217513394 2025-09-07T08:21:25.5481743Z pass 2025-09-07T08:21:25.5482426Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:21:25.5483957Z TIMING: _recursive_pre_grad_passes:0.03463 _recursive_joint_graph_passes:0.40192 _recursive_post_grad_passes:0.07401 linear_unary_template_precompiling:1.38405 linear_unary_template_autotuning:0.86661 async_compile.wait:0.79647 code_gen:3.29 inductor_compile:20.2827 backend_compile:25.43698 gc:0.0006 entire_frame_compile:28.21751 total_wall_time:28.21751 2025-09-07T08:21:25.5485250Z STATS: call_* op count: 305 | FakeTensorMode.__torch_dispatch__:27113 | FakeTensor.__torch_dispatch__:3000 | ProxyTorchDispatchMode.__torch_dispatch__:7255 2025-09-07T08:21:25.5485805Z Dynamo produced 1 graphs covering 305 ops with 0 graph breaks (0 unique) 2025-09-07T08:21:28.4454516Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T08:21:28.4457936Z import pynvml # type: ignore[import] 2025-09-07T08:21:31.2147086Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-09-07T08:21:31.2148075Z from pkg_resources import resource_filename 2025-09-07T08:21:31.8668866Z 2025-09-07T08:21:33.0308858Z loading model: 0it [00:00, ?it/s]We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked. 2025-09-07T08:21:33.0309932Z You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded. 2025-09-07T08:21:33.0310957Z WARNING:transformers.modeling_utils:We strongly recommend passing in an `attention_mask` since your input_ids may be padded. See https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked. 
2025-09-07T08:21:33.0313028Z You may ignore this warning if your `pad_token_id` (0) is identical to the `bos_token_id` (0), `eos_token_id` (2), or the `sep_token_id` (None), and your input is not padded. 2025-09-07T08:21:33.1355275Z 2025-09-07T08:21:33.1356130Z loading model: 0it [00:01, ?it/s] 2025-09-07T08:21:33.1356460Z cpu eval RobertaForQuestionAnswering 2025-09-07T08:21:33.5285295Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:21:33.6629747Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:21:33.8016512Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:21:56.8602549Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:21:56.8603079Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8603463Z return mod(**inputs) 2025-09-07T08:21:56.8603944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8604408Z outputs = self.roberta( 2025-09-07T08:21:56.8604852Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward 2025-09-07T08:21:56.8605318Z embedding_output = self.embeddings( 2025-09-07T08:21:56.8605761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward 2025-09-07T08:21:56.8606375Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-09-07T08:21:56.8607345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1576, in create_position_ids_from_input_ids 2025-09-07T08:21:56.8607911Z mask = input_ids.ne(padding_idx).int() 2025-09-07T08:21:56.8608078Z 2025-09-07T08:21:56.8608173Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8608412Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8608630Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8608848Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8609057Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8609271Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8609482Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8609702Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8609941Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8610173Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8610759Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8611051Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8611323Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:21:56.8611737Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8612080Z return mod(**inputs) 2025-09-07T08:21:56.8612472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1530, in forward 2025-09-07T08:21:56.8612934Z logits = self.qa_outputs(sequence_output) 2025-09-07T08:21:56.8613079Z 2025-09-07T08:21:56.8613227Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8613589Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8613906Z return mod(**inputs) 2025-09-07T08:21:56.8614284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8614691Z outputs = self.roberta( 2025-09-07T08:21:56.8615073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward 2025-09-07T08:21:56.8615520Z embedding_output = self.embeddings( 2025-09-07T08:21:56.8615929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward 2025-09-07T08:21:56.8616449Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-09-07T08:21:56.8617033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1577, in create_position_ids_from_input_ids 2025-09-07T08:21:56.8617643Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-09-07T08:21:56.8617905Z 2025-09-07T08:21:56.8618023Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:21:56.8618384Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8618717Z return mod(**inputs) 2025-09-07T08:21:56.8619102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8619505Z outputs = self.roberta( 2025-09-07T08:21:56.8619886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 826, in forward 2025-09-07T08:21:56.8620292Z embedding_output = self.embeddings( 2025-09-07T08:21:56.8620696Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 89, in forward 2025-09-07T08:21:56.8621222Z position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx, past_key_values_length) 2025-09-07T08:21:56.8621818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1577, in create_position_ids_from_input_ids 2025-09-07T08:21:56.8622385Z incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask 2025-09-07T08:21:56.8622619Z 2025-09-07T08:21:56.8622702Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8622916Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8623124Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8623331Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8623532Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8623739Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8623944Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8624176Z cudagraph partition due to non gpu ops. 
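The two traces just above land in RoBERTa's create_position_ids_from_input_ids, which derives position ids from the padding mask. Pulling the two quoted lines into a small self-contained sketch (the final return line is an assumption added for illustration, not quoted from the log):

import torch

def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
    # The next two lines are the ones quoted in the tracebacks above
    # (modeling_roberta.py, lines 1576-1577).
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
    # Assumed final step: shift back by padding_idx so padded positions keep padding_idx.
    return incremental_indices.long() + padding_idx

ids = torch.tensor([[0, 5, 7, 1, 1]])                           # toy example with padding_idx = 1
print(create_position_ids_from_input_ids(ids, padding_idx=1))   # tensor([[2, 3, 4, 1, 1]])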
Found from : 2025-09-07T08:21:56.8624525Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8624849Z return mod(**inputs) 2025-09-07T08:21:56.8625250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8625662Z outputs = self.roberta( 2025-09-07T08:21:56.8626046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:56.8626443Z encoder_outputs = self.encoder( 2025-09-07T08:21:56.8626847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:56.8627265Z layer_outputs = layer_module( 2025-09-07T08:21:56.8627634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:56.8628008Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:56.8628431Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:56.8628865Z self_attention_outputs = self.attention( 2025-09-07T08:21:56.8629265Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8629681Z return func(*args, **kwargs) 2025-09-07T08:21:56.8630122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:56.8630529Z self_outputs = self.self( 2025-09-07T08:21:56.8630909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8631293Z return func(*args, **kwargs) 2025-09-07T08:21:56.8631687Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:56.8632146Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:56.8632344Z 2025-09-07T08:21:56.8632429Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8632649Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8632866Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8633084Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8633291Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8633501Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8633709Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8633912Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8634124Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8634340Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8634550Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8634752Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8634995Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8635364Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8635703Z return mod(**inputs) 2025-09-07T08:21:56.8636112Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8636537Z outputs = self.roberta( 2025-09-07T08:21:56.8636954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:56.8637360Z encoder_outputs = self.encoder( 2025-09-07T08:21:56.8637760Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:56.8638154Z layer_outputs = layer_module( 2025-09-07T08:21:56.8638518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:56.8638895Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:56.8639307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:56.8639756Z self_attention_outputs = self.attention( 2025-09-07T08:21:56.8640141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8640523Z return func(*args, **kwargs) 2025-09-07T08:21:56.8640911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:56.8641311Z self_outputs = self.self( 2025-09-07T08:21:56.8641692Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8642069Z return func(*args, **kwargs) 2025-09-07T08:21:56.8642466Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:56.8642927Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:56.8643116Z 2025-09-07T08:21:56.8643204Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8643415Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8643632Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8643846Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8644085Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8644302Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8644526Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8644747Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8644973Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8645429Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8645657Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8645884Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8646138Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8646567Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8646925Z return mod(**inputs) 2025-09-07T08:21:56.8647467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8647896Z outputs = self.roberta( 2025-09-07T08:21:56.8648373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:56.8648782Z encoder_outputs = self.encoder( 2025-09-07T08:21:56.8649189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:56.8649595Z layer_outputs = layer_module( 2025-09-07T08:21:56.8649953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:56.8650328Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:56.8650736Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:56.8651153Z self_attention_outputs = self.attention( 2025-09-07T08:21:56.8651548Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8651925Z return func(*args, **kwargs) 2025-09-07T08:21:56.8652318Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:56.8652719Z self_outputs = self.self( 2025-09-07T08:21:56.8653090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8653462Z return func(*args, **kwargs) 2025-09-07T08:21:56.8653832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:56.8654266Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:56.8654533Z 2025-09-07T08:21:56.8654622Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8654835Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8655041Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8655252Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8655462Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8655669Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8655883Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8656152Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8656386Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8656594Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8656863Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8657073Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8657311Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8657684Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8657999Z return mod(**inputs) 2025-09-07T08:21:56.8658370Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8658785Z outputs = self.roberta( 2025-09-07T08:21:56.8659160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:56.8659551Z encoder_outputs = self.encoder( 2025-09-07T08:21:56.8659934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:56.8660328Z layer_outputs = layer_module( 2025-09-07T08:21:56.8660679Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:56.8661040Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:56.8661442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:56.8661827Z self_attention_outputs = self.attention( 2025-09-07T08:21:56.8662212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8662584Z return func(*args, **kwargs) 2025-09-07T08:21:56.8662966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:56.8663352Z self_outputs = self.self( 2025-09-07T08:21:56.8663723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8664084Z return func(*args, **kwargs) 2025-09-07T08:21:56.8664457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:56.8664893Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:56.8665067Z 2025-09-07T08:21:56.8665145Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8665354Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8665560Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8665762Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8665957Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8666161Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8666361Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8666562Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8666757Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8666961Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8667165Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8667379Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8667601Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8667988Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8668311Z return mod(**inputs) 2025-09-07T08:21:56.8668693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8669072Z outputs = self.roberta( 2025-09-07T08:21:56.8669437Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:56.8669847Z encoder_outputs = self.encoder( 2025-09-07T08:21:56.8670243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:56.8670640Z layer_outputs = layer_module( 2025-09-07T08:21:56.8671007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:56.8671360Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:56.8671761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:56.8672160Z self_attention_outputs = self.attention( 2025-09-07T08:21:56.8672602Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8672968Z return func(*args, **kwargs) 2025-09-07T08:21:56.8673349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:56.8673743Z self_outputs = self.self( 2025-09-07T08:21:56.8674103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8674472Z return func(*args, **kwargs) 2025-09-07T08:21:56.8674847Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:56.8675306Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:56.8675492Z 2025-09-07T08:21:56.8675573Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8675791Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8676003Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8676221Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8676435Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8676648Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8676856Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8677074Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8677280Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8677485Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8677685Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8677891Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8678125Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8678524Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8678853Z return mod(**inputs) 2025-09-07T08:21:56.8679226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8679620Z outputs = self.roberta( 2025-09-07T08:21:56.8679995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:56.8680392Z encoder_outputs = self.encoder( 2025-09-07T08:21:56.8680774Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:56.8681164Z layer_outputs = layer_module( 2025-09-07T08:21:56.8681512Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:56.8681919Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:56.8682320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:56.8682726Z self_attention_outputs = self.attention( 2025-09-07T08:21:56.8683125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8683506Z return func(*args, **kwargs) 2025-09-07T08:21:56.8683921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:56.8684318Z self_outputs = self.self( 2025-09-07T08:21:56.8684681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8685058Z return func(*args, **kwargs) 2025-09-07T08:21:56.8685450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:56.8685907Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:56.8686099Z 2025-09-07T08:21:56.8686185Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8686434Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8686665Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8686892Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8687235Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8687461Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8687687Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8687915Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8688142Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8688363Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8688597Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8688814Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8689066Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8689433Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8689774Z return mod(**inputs) 2025-09-07T08:21:56.8690170Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8690574Z outputs = self.roberta( 2025-09-07T08:21:56.8690954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:56.8691361Z encoder_outputs = self.encoder( 2025-09-07T08:21:56.8691761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:56.8692201Z layer_outputs = layer_module( 2025-09-07T08:21:56.8692558Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:56.8692931Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:56.8693342Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:56.8693757Z self_attention_outputs = self.attention( 2025-09-07T08:21:56.8694150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8694533Z return func(*args, **kwargs) 2025-09-07T08:21:56.8694914Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:56.8695311Z self_outputs = self.self( 2025-09-07T08:21:56.8695681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8696060Z return func(*args, **kwargs) 2025-09-07T08:21:56.8696472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:56.8697822Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:56.8698017Z 2025-09-07T08:21:56.8698100Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8698323Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8698539Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8698748Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8698981Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8699199Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8699411Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8699626Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8699831Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8700039Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8700247Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8700459Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8700704Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8701077Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8701415Z return mod(**inputs) 2025-09-07T08:21:56.8701835Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8702314Z outputs = self.roberta( 2025-09-07T08:21:56.8702703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:56.8703105Z encoder_outputs = self.encoder( 2025-09-07T08:21:56.8703504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:56.8703899Z layer_outputs = layer_module( 2025-09-07T08:21:56.8704260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:56.8704634Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:56.8705046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:56.8705467Z self_attention_outputs = self.attention( 2025-09-07T08:21:56.8705859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8706240Z return func(*args, **kwargs) 2025-09-07T08:21:56.8706634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:56.8707037Z self_outputs = self.self( 2025-09-07T08:21:56.8707397Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8707775Z return func(*args, **kwargs) 2025-09-07T08:21:56.8708177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:56.8708631Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:56.8708816Z 2025-09-07T08:21:56.8708907Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8709117Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8709339Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8709564Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8709792Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8710011Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8710223Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8710435Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8710649Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8710853Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8711088Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8711345Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8711600Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8711977Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8712314Z return mod(**inputs) 2025-09-07T08:21:56.8712698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8713100Z outputs = self.roberta( 2025-09-07T08:21:56.8713497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:56.8713898Z encoder_outputs = self.encoder( 2025-09-07T08:21:56.8714294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:56.8714714Z layer_outputs = layer_module( 2025-09-07T08:21:56.8715096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:56.8715482Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:56.8715934Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:56.8716375Z self_attention_outputs = self.attention( 2025-09-07T08:21:56.8716788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8717188Z return func(*args, **kwargs) 2025-09-07T08:21:56.8717575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:56.8717996Z self_outputs = self.self( 2025-09-07T08:21:56.8718388Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8718790Z return func(*args, **kwargs) 2025-09-07T08:21:56.8719194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:56.8719678Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:56.8719877Z 2025-09-07T08:21:56.8719963Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8720194Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8720423Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8720642Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8720866Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8721090Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8721313Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8721526Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8721749Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8721973Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8722200Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8722415Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8722668Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8723056Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8723408Z return mod(**inputs) 2025-09-07T08:21:56.8723810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8724237Z outputs = self.roberta( 2025-09-07T08:21:56.8724640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:56.8725063Z encoder_outputs = self.encoder( 2025-09-07T08:21:56.8725481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:56.8725941Z layer_outputs = layer_module( 2025-09-07T08:21:56.8726320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:56.8726715Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:56.8727311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:56.8727771Z self_attention_outputs = self.attention( 2025-09-07T08:21:56.8728263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8728685Z return func(*args, **kwargs) 2025-09-07T08:21:56.8729105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:56.8729533Z self_outputs = self.self( 2025-09-07T08:21:56.8729924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8730339Z return func(*args, **kwargs) 2025-09-07T08:21:56.8730753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:56.8731265Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:56.8731455Z 2025-09-07T08:21:56.8731545Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8731755Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8731973Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8732187Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8732400Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8732614Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8732820Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8733026Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8733233Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8733435Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8733645Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8733850Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8734088Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8734446Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8734772Z return mod(**inputs) 2025-09-07T08:21:56.8735188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8735579Z outputs = self.roberta( 2025-09-07T08:21:56.8735952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:56.8736335Z encoder_outputs = self.encoder( 2025-09-07T08:21:56.8736717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:56.8737112Z layer_outputs = layer_module( 2025-09-07T08:21:56.8737464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:56.8737827Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:56.8738236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:56.8738647Z self_attention_outputs = self.attention( 2025-09-07T08:21:56.8739041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8739427Z return func(*args, **kwargs) 2025-09-07T08:21:56.8739801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:56.8740188Z self_outputs = self.self( 2025-09-07T08:21:56.8740547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8740951Z return func(*args, **kwargs) 2025-09-07T08:21:56.8741325Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:56.8741759Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:56.8741945Z 2025-09-07T08:21:56.8742022Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8742235Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8742462Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8742666Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8742872Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8743078Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8743283Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8743484Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8743693Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8743899Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8744104Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8744302Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8744554Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8744909Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8745433Z return mod(**inputs) 2025-09-07T08:21:56.8745813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1516, in forward 2025-09-07T08:21:56.8746205Z outputs = self.roberta( 2025-09-07T08:21:56.8746582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 890, in forward 2025-09-07T08:21:56.8746977Z encoder_outputs = self.encoder( 2025-09-07T08:21:56.8747369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 631, in forward 2025-09-07T08:21:56.8747756Z layer_outputs = layer_module( 2025-09-07T08:21:56.8748124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:21:56.8748485Z return super().__call__(*args, **kwargs) 2025-09-07T08:21:56.8748888Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 539, in forward 2025-09-07T08:21:56.8749283Z self_attention_outputs = self.attention( 2025-09-07T08:21:56.8749653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8750027Z return func(*args, **kwargs) 2025-09-07T08:21:56.8750412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 466, in forward 2025-09-07T08:21:56.8750813Z self_outputs = self.self( 2025-09-07T08:21:56.8751168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/deprecation.py", line 172, in wrapped_func 2025-09-07T08:21:56.8751524Z return func(*args, **kwargs) 2025-09-07T08:21:56.8751899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 388, in forward 2025-09-07T08:21:56.8752360Z attn_output = torch.nn.functional.scaled_dot_product_attention( 2025-09-07T08:21:56.8752548Z 2025-09-07T08:21:56.8752638Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8752852Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8753068Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8753282Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8753495Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8753700Z cudagraph partition due to non gpu ops 2025-09-07T08:21:56.8753946Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:21:56.8754351Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8754736Z return mod(**inputs) 2025-09-07T08:21:56.8755102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1548, in forward 2025-09-07T08:21:56.8755514Z start_loss = loss_fct(start_logits, start_positions) 2025-09-07T08:21:56.8755679Z 2025-09-07T08:21:56.8755783Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:21:56.8756164Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:21:56.8756489Z return mod(**inputs) 2025-09-07T08:21:56.8756855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/roberta/modeling_roberta.py", line 1549, in forward 2025-09-07T08:21:56.8757274Z end_loss = loss_fct(end_logits, end_positions) 2025-09-07T08:21:56.8757429Z 2025-09-07T08:22:00.2794436Z Compilation time (from dynamo_timed): 25.290820853 2025-09-07T08:22:00.2795014Z pass 2025-09-07T08:22:00.2797269Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:22:00.2798554Z TIMING: _recursive_pre_grad_passes:0.03545 _recursive_joint_graph_passes:0.3808 _recursive_post_grad_passes:0.07716 linear_unary_template_precompiling:0.01312 async_compile.wait:0.45266 code_gen:2.92745 inductor_compile:17.51192 backend_compile:22.53276 gc:0.00142 entire_frame_compile:25.29082 total_wall_time:25.29082 2025-09-07T08:22:00.2799820Z STATS: call_* op count: 305 | FakeTensorMode.__torch_dispatch__:26931 | FakeTensor.__torch_dispatch__:3024 | ProxyTorchDispatchMode.__torch_dispatch__:7255 2025-09-07T08:22:00.2800389Z Dynamo produced 1 graphs covering 305 ops with 0 graph breaks (0 unique) 2025-09-07T08:22:03.0748778Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T08:22:03.0749790Z import pynvml # type: ignore[import] 2025-09-07T08:22:05.7968667Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-09-07T08:22:05.7969548Z from pkg_resources import resource_filename
2025-09-07T08:22:06.4603476Z
2025-09-07T08:22:07.4900291Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:22:07.4900955Z loading model: 0it [00:01, ?it/s]
2025-09-07T08:22:07.4901497Z cpu eval T5ForConditionalGeneration
2025-09-07T08:22:08.5007887Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:22:08.8959849Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:22:09.2991167Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:22:29.9664067Z Autotune Choices Stats:
2025-09-07T08:22:29.9667153Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.04152599990447925}
2025-09-07T08:22:29.9676443Z AUTOTUNE linear_unary(1024x512, 512x512)
2025-09-07T08:22:29.9676693Z strides: [512, 1], [1, 0]
2025-09-07T08:22:29.9676919Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:22:29.9677187Z cpp_CppMicroGemmAMX_0 0.0415 ms 100.0%
2025-09-07T08:22:29.9677431Z _linear_pointwise 0.0887 ms 46.8%
2025-09-07T08:22:29.9677815Z SingleProcess AUTOTUNE benchmarking takes 0.2795 seconds and 1.3436 seconds precompiling for 2 choices
2025-09-07T08:22:32.2687665Z Autotune Choices Stats:
2025-09-07T08:22:32.2688135Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_3", "best_time": 0.11146700035169488}
2025-09-07T08:22:32.2701172Z AUTOTUNE bmm(8x1024x1024, 8x1024x64)
2025-09-07T08:22:32.2701453Z strides: [1048576, 1024, 1], [64, 512, 1]
2025-09-07T08:22:32.2702224Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:22:32.2702557Z cpp_CppMicroGemmAMX_3 0.1115 ms 100.0%
2025-09-07T08:22:32.2702828Z bmm 0.5746 ms 19.4%
2025-09-07T08:22:32.2703196Z SingleProcess AUTOTUNE benchmarking takes 0.4477 seconds and 1.3930 seconds precompiling for 2 choices
2025-09-07T08:22:34.6048006Z Autotune Choices Stats:
2025-09-07T08:22:34.6049110Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_11", "best_time": 0.1121280001825653}
2025-09-07T08:22:34.6060632Z AUTOTUNE linear_unary(1024x512, 2048x512)
2025-09-07T08:22:34.6060972Z strides: [512, 1], [1, 0]
2025-09-07T08:22:34.6061215Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:22:34.6061482Z cpp_CppMicroGemmAMX_11 0.1121 ms 100.0%
2025-09-07T08:22:34.6061734Z _linear_pointwise 0.1798 ms 62.4%
2025-09-07T08:22:34.6062158Z SingleProcess AUTOTUNE benchmarking takes 0.3073 seconds and 1.3364 seconds precompiling for 2 choices
2025-09-07T08:22:36.3345631Z Autotune Choices Stats:
2025-09-07T08:22:36.3346428Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_12", "best_time": 0.12293199984014791}
2025-09-07T08:22:36.3356501Z AUTOTUNE linear_unary(1024x2048, 512x2048)
2025-09-07T08:22:36.3356787Z strides: [2048, 1], [1, 0]
2025-09-07T08:22:36.3357024Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:22:36.3357306Z cpp_CppMicroGemmAMX_12 0.1229 ms 100.0%
2025-09-07T08:22:36.3357553Z _linear_pointwise 0.1659 ms 74.1%
2025-09-07T08:22:36.3357932Z SingleProcess AUTOTUNE benchmarking takes 0.3080 seconds and 1.3461 seconds precompiling for 2 choices
2025-09-07T08:22:47.0125301Z Autotune Choices Stats:
2025-09-07T08:22:47.0125773Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_114", "best_time": 2.2017800001776777}
2025-09-07T08:22:47.0146103Z AUTOTUNE linear_unary(1024x512, 32128x512)
2025-09-07T08:22:47.0146823Z strides: [512, 1], [1, 0]
2025-09-07T08:22:47.0147143Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:22:47.0147405Z cpp_CppMicroGemmAMX_114 2.2018 ms 100.0%
2025-09-07T08:22:47.0147659Z _linear_pointwise 10.9949 ms 20.0%
2025-09-07T08:22:47.0148065Z SingleProcess AUTOTUNE benchmarking takes 0.9814 seconds and 1.3314 seconds precompiling for 2 choices
2025-09-07T08:22:48.3803671Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:22:48.3804319Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:22:48.3804694Z return mod(**inputs)
2025-09-07T08:22:48.3805133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:22:48.3805568Z decoder_outputs = self.decoder(
2025-09-07T08:22:48.3805979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:22:48.3806409Z layer_outputs = layer_module(
2025-09-07T08:22:48.3806805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:22:48.3807567Z return super().__call__(*args, **kwargs)
2025-09-07T08:22:48.3808004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-09-07T08:22:48.3808446Z self_attention_outputs = self.layer[0](
2025-09-07T08:22:48.3808851Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:22:48.3809229Z attention_output = self.SelfAttention(
2025-09-07T08:22:48.3809609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 546, in forward
2025-09-07T08:22:48.3810000Z position_bias = position_bias + causal_mask
2025-09-07T08:22:48.3810503Z
2025-09-07T08:22:48.3810673Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.3810916Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.3811194Z cudagraph partition due to non gpu ops.
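
Each AUTOTUNE block above records Inductor's CPU max-autotune pass picking between the C++ AMX micro-GEMM template (cpp_CppMicroGemmAMX_*) and what appears to be the non-template fallback (_linear_pointwise / bmm) for a given problem size, with the fastest candidate normalized to 100.0%. The lines below are a minimal sketch of how that autotuning can be requested outside of CI; the flags exist in torch._inductor.config, but treating this as the exact combination the job uses is an assumption rather than something read from this log.

import torch
import torch._inductor.config as inductor_config

inductor_config.max_autotune = True              # benchmark several GEMM implementations per op
inductor_config.freezing = True                  # fold constant weights so templates can specialize
inductor_config.max_autotune_gemm_backends = "CPP,ATEN"  # C++ template vs. ATen, as in the log

# Small linear + unary pattern, roughly the shape class autotuned above.
model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU()).eval()
x = torch.randn(1024, 512)

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    compiled = torch.compile(model)
    compiled(x)  # the first call compiles and runs the GEMM autotuning
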
Found from : 2025-09-07T08:22:48.3811614Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3811989Z return mod(**inputs) 2025-09-07T08:22:48.3812408Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-09-07T08:22:48.3812844Z decoder_outputs = self.decoder( 2025-09-07T08:22:48.3813230Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3813613Z layer_outputs = layer_module( 2025-09-07T08:22:48.3813977Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3814367Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3814784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3815172Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3815608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3815997Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3816382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:22:48.3816835Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:22:48.3817038Z 2025-09-07T08:22:48.3817128Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3817360Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3817582Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3817841Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3818234Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3818574Z return mod(**inputs) 2025-09-07T08:22:48.3818938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-09-07T08:22:48.3819319Z decoder_outputs = self.decoder( 2025-09-07T08:22:48.3819695Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3820075Z layer_outputs = layer_module( 2025-09-07T08:22:48.3820436Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3820800Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3821196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3821600Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3822005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3822389Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3822768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:22:48.3823190Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:22:48.3823363Z 2025-09-07T08:22:48.3823469Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3823841Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3824178Z return mod(**inputs) 2025-09-07T08:22:48.3824525Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-09-07T08:22:48.3824920Z decoder_outputs = self.decoder( 2025-09-07T08:22:48.3825339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3825769Z layer_outputs = layer_module( 2025-09-07T08:22:48.3826150Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3826554Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3826976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3827413Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3827871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3828249Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3828628Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:22:48.3829040Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:22:48.3829205Z 2025-09-07T08:22:48.3829296Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3829516Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3829725Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3829987Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3830352Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3830685Z return mod(**inputs) 2025-09-07T08:22:48.3831035Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3831410Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3831797Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3832200Z layer_outputs = layer_module( 2025-09-07T08:22:48.3832566Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3832959Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3833367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3833769Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3834171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3834553Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3834954Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:22:48.3835402Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:22:48.3835595Z 2025-09-07T08:22:48.3835716Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3836100Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3836449Z return mod(**inputs) 2025-09-07T08:22:48.3836820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3837220Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3837613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3838004Z layer_outputs = layer_module( 2025-09-07T08:22:48.3838372Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3838760Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3839157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3839559Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3839974Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3840401Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3840803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 558, in forward 2025-09-07T08:22:48.3841284Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) 2025-09-07T08:22:48.3841508Z 2025-09-07T08:22:48.3841606Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3841882Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3842122Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3842379Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3842766Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3843119Z return mod(**inputs) 2025-09-07T08:22:48.3843509Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3843929Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3844334Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3844766Z layer_outputs = layer_module( 2025-09-07T08:22:48.3845380Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3845794Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3846217Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3846677Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3847221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3847660Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3848092Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:22:48.3848548Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:22:48.3848729Z 2025-09-07T08:22:48.3849028Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3849412Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3849779Z return mod(**inputs) 2025-09-07T08:22:48.3850154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3850556Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3850962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3851361Z layer_outputs = layer_module( 2025-09-07T08:22:48.3851717Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3852094Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3852476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3852931Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3853310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3853690Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3854071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:22:48.3854473Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:22:48.3854635Z 2025-09-07T08:22:48.3854718Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3854936Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3855220Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3855461Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3855667Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3855881Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3856124Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3856497Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3856824Z return mod(**inputs) 2025-09-07T08:22:48.3857930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3858325Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3858699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3859070Z layer_outputs = layer_module( 2025-09-07T08:22:48.3859415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3859789Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3860164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3860575Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3860956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3861334Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3861721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:22:48.3862151Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:22:48.3862344Z 2025-09-07T08:22:48.3862465Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3862859Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3863202Z return mod(**inputs) 2025-09-07T08:22:48.3863569Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3863975Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3864367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3864737Z layer_outputs = layer_module( 2025-09-07T08:22:48.3865097Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3865479Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3865891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3866272Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3866647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3867040Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3867423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 558, in forward 2025-09-07T08:22:48.3867881Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) 2025-09-07T08:22:48.3868097Z 2025-09-07T08:22:48.3868182Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3868404Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3868628Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3868866Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3869221Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3869541Z return mod(**inputs) 2025-09-07T08:22:48.3869886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3870301Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3870668Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3871034Z layer_outputs = layer_module( 2025-09-07T08:22:48.3871390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3871758Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3872163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3872534Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3872896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3873274Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3873649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:22:48.3874053Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:22:48.3874213Z 2025-09-07T08:22:48.3874316Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3874699Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3875027Z return mod(**inputs) 2025-09-07T08:22:48.3875374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3875744Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3876098Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3876475Z layer_outputs = layer_module( 2025-09-07T08:22:48.3876848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3877242Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3877642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3878042Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3878449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3878824Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3879196Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:22:48.3879587Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:22:48.3879761Z 2025-09-07T08:22:48.3879844Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3880063Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3880276Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3880490Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3880698Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3880937Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3881305Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3881648Z return mod(**inputs) 2025-09-07T08:22:48.3882015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3882409Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3882785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3883160Z layer_outputs = layer_module( 2025-09-07T08:22:48.3883507Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3883875Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3884330Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3884713Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3885088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3885483Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3885880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:22:48.3886346Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:22:48.3886539Z 2025-09-07T08:22:48.3886660Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3887163Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3887553Z return mod(**inputs) 2025-09-07T08:22:48.3887944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3888361Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3888758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3889132Z layer_outputs = layer_module( 2025-09-07T08:22:48.3889490Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3889866Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3890250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3890636Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3891015Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3891401Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3891788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 558, in forward 2025-09-07T08:22:48.3892246Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) 2025-09-07T08:22:48.3892460Z 2025-09-07T08:22:48.3892553Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3892770Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3892990Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3893237Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3893611Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3893936Z return mod(**inputs) 2025-09-07T08:22:48.3894295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3894680Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3895065Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3895438Z layer_outputs = layer_module( 2025-09-07T08:22:48.3895800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3896171Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3896556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3896941Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3897321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3897710Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3898093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:22:48.3898525Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:22:48.3898726Z 2025-09-07T08:22:48.3898838Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3899203Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3899542Z return mod(**inputs) 2025-09-07T08:22:48.3899898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3900290Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3900676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3901054Z layer_outputs = layer_module( 2025-09-07T08:22:48.3901422Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3901816Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3902226Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3902629Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3903025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3903419Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3903814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:22:48.3904270Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:22:48.3904445Z 2025-09-07T08:22:48.3904533Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3904766Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3904995Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3905218Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3905434Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3905691Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3906081Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3906437Z return mod(**inputs) 2025-09-07T08:22:48.3906825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3907239Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3907647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3908053Z layer_outputs = layer_module( 2025-09-07T08:22:48.3908432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3908820Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3909244Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3909670Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3910072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3910475Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3910881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:22:48.3911334Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:22:48.3911528Z 2025-09-07T08:22:48.3911650Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3912040Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3912390Z return mod(**inputs) 2025-09-07T08:22:48.3912779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3913211Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3913625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3913987Z layer_outputs = layer_module( 2025-09-07T08:22:48.3914324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3914683Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3915067Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3915451Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3915830Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3916219Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3916601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 558, in forward 2025-09-07T08:22:48.3917065Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) 2025-09-07T08:22:48.3917285Z 2025-09-07T08:22:48.3917374Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3917600Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3917815Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3918054Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3918430Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3918778Z return mod(**inputs) 2025-09-07T08:22:48.3919165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3919567Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3919961Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3920338Z layer_outputs = layer_module( 2025-09-07T08:22:48.3920682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3921064Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3921467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3921872Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3922271Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3922677Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3923077Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:22:48.3923510Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:22:48.3923684Z 2025-09-07T08:22:48.3923807Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3924188Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3924542Z return mod(**inputs) 2025-09-07T08:22:48.3924916Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3925320Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3925711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3926103Z layer_outputs = layer_module( 2025-09-07T08:22:48.3926482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3926872Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3927365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3927815Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3928221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3928634Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3929043Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:22:48.3929487Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:22:48.3929662Z 2025-09-07T08:22:48.3929769Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3930010Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3930238Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3930462Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3930748Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3931153Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3931523Z return mod(**inputs) 2025-09-07T08:22:48.3931897Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3932307Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3932720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3933131Z layer_outputs = layer_module( 2025-09-07T08:22:48.3933510Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3933917Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3934328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3934748Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3935165Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3935590Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3936000Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:22:48.3936463Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:22:48.3936666Z 2025-09-07T08:22:48.3936778Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3937176Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3937537Z return mod(**inputs) 2025-09-07T08:22:48.3937917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3938331Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3938738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3939146Z layer_outputs = layer_module( 2025-09-07T08:22:48.3939503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3939870Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3940256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3940647Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3941054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3941475Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3941853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 558, in forward 2025-09-07T08:22:48.3942317Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) 2025-09-07T08:22:48.3942559Z 2025-09-07T08:22:48.3942662Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3942885Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3943098Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3943343Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3943715Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3944055Z return mod(**inputs) 2025-09-07T08:22:48.3944438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3944816Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3945407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3945794Z layer_outputs = layer_module( 2025-09-07T08:22:48.3946153Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3946522Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3946902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3947286Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3947723Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3948118Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3948492Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:22:48.3948902Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:22:48.3949074Z 2025-09-07T08:22:48.3949179Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3949548Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3949884Z return mod(**inputs) 2025-09-07T08:22:48.3950243Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3950648Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3951044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3951450Z layer_outputs = layer_module( 2025-09-07T08:22:48.3951798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3952170Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3952547Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3952943Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3953349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3953727Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3954109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:22:48.3954525Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:22:48.3954688Z 2025-09-07T08:22:48.3954778Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3954998Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3955215Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3955443Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3955666Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3955918Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3956279Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3956613Z return mod(**inputs) 2025-09-07T08:22:48.3956968Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3957418Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3957784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3958163Z layer_outputs = layer_module( 2025-09-07T08:22:48.3958518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3958889Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3959296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3959672Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3960063Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3960461Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3960853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:22:48.3961285Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:22:48.3961490Z 2025-09-07T08:22:48.3961626Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3962015Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3962369Z return mod(**inputs) 2025-09-07T08:22:48.3962745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3963116Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3963487Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3963858Z layer_outputs = layer_module( 2025-09-07T08:22:48.3964213Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3964585Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3964952Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3965341Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3965742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3966144Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3966541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 558, in forward 2025-09-07T08:22:48.3967078Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) 2025-09-07T08:22:48.3967323Z 2025-09-07T08:22:48.3967413Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3967646Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3967879Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3968129Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:22:48.3968520Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3968880Z return mod(**inputs) 2025-09-07T08:22:48.3969239Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3969614Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3969993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3970389Z layer_outputs = layer_module( 2025-09-07T08:22:48.3970768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3971158Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3971576Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3972002Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3972384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3972765Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3973133Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:22:48.3973562Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:22:48.3973735Z 2025-09-07T08:22:48.3973841Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:22:48.3974215Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:22:48.3974556Z return mod(**inputs) 2025-09-07T08:22:48.3974911Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:22:48.3975304Z encoder_outputs = self.encoder( 2025-09-07T08:22:48.3975690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:22:48.3976084Z layer_outputs = layer_module( 2025-09-07T08:22:48.3976430Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:22:48.3976786Z return super().__call__(*args, **kwargs) 2025-09-07T08:22:48.3977157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:22:48.3977534Z self_attention_outputs = self.layer[0]( 2025-09-07T08:22:48.3977913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:22:48.3978295Z attention_output = self.SelfAttention( 2025-09-07T08:22:48.3978676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:22:48.3979088Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:22:48.3979262Z 2025-09-07T08:22:48.3979352Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3979567Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3979770Z cudagraph partition due to non gpu ops 2025-09-07T08:22:48.3980005Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-07T08:22:48.3980366Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:22:48.3980703Z return mod(**inputs)
2025-09-07T08:22:48.3981058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:22:48.3981443Z decoder_outputs = self.decoder(
2025-09-07T08:22:48.3981818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:22:48.3982199Z layer_outputs = layer_module(
2025-09-07T08:22:48.3982556Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:22:48.3982936Z return super().__call__(*args, **kwargs)
2025-09-07T08:22:48.3983340Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward
2025-09-07T08:22:48.3983756Z cross_attention_outputs = self.layer[1](
2025-09-07T08:22:48.3984160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward
2025-09-07T08:22:48.3984563Z attention_output = self.EncDecAttention(
2025-09-07T08:22:48.3984967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward
2025-09-07T08:22:48.3985399Z scores = torch.matmul(query_states, key_states.transpose(3, 2))
2025-09-07T08:22:48.3985623Z
2025-09-07T08:22:48.3985713Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.3985933Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.3986142Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.3986386Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:22:48.3986753Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:22:48.3987087Z return mod(**inputs)
2025-09-07T08:22:48.3987450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:22:48.3987829Z decoder_outputs = self.decoder(
2025-09-07T08:22:48.3988202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:22:48.3988578Z layer_outputs = layer_module(
2025-09-07T08:22:48.3988936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:22:48.4001243Z return super().__call__(*args, **kwargs)
2025-09-07T08:22:48.4001859Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward
2025-09-07T08:22:48.4002393Z cross_attention_outputs = self.layer[1](
2025-09-07T08:22:48.4002808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward
2025-09-07T08:22:48.4003206Z attention_output = self.EncDecAttention(
2025-09-07T08:22:48.4003612Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward
2025-09-07T08:22:48.4004034Z attn_output = torch.matmul(attn_weights, value_states)
2025-09-07T08:22:48.4004212Z
2025-09-07T08:22:48.4004339Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:22:48.4004724Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:22:48.4005074Z return mod(**inputs)
2025-09-07T08:22:48.4005451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:22:48.4005843Z decoder_outputs = self.decoder(
2025-09-07T08:22:48.4006229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:22:48.4006619Z layer_outputs = layer_module(
2025-09-07T08:22:48.4007079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:22:48.4007500Z return super().__call__(*args, **kwargs)
2025-09-07T08:22:48.4007917Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward
2025-09-07T08:22:48.4008349Z cross_attention_outputs = self.layer[1](
2025-09-07T08:22:48.4008765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward
2025-09-07T08:22:48.4009199Z attention_output = self.EncDecAttention(
2025-09-07T08:22:48.4009591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward
2025-09-07T08:22:48.4010012Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:22:48.4010180Z
2025-09-07T08:22:48.4010279Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.4010498Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.4010720Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.4010935Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.4011150Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.4011390Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:22:48.4011767Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:22:48.4012108Z return mod(**inputs)
2025-09-07T08:22:48.4012526Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:22:48.4012932Z decoder_outputs = self.decoder(
2025-09-07T08:22:48.4013298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:22:48.4013670Z layer_outputs = layer_module(
2025-09-07T08:22:48.4014029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:22:48.4014438Z return super().__call__(*args, **kwargs)
2025-09-07T08:22:48.4014813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-09-07T08:22:48.4015208Z self_attention_outputs = self.layer[0](
2025-09-07T08:22:48.4015595Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:22:48.4015990Z attention_output = self.SelfAttention(
2025-09-07T08:22:48.4016377Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward
2025-09-07T08:22:48.4016832Z scores = torch.matmul(query_states, key_states.transpose(3, 2))
2025-09-07T08:22:48.4017057Z
2025-09-07T08:22:48.4017146Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.4017376Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.4017590Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.4017828Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:22:48.4018201Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:22:48.4018538Z return mod(**inputs)
2025-09-07T08:22:48.4018900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:22:48.4019283Z decoder_outputs = self.decoder(
2025-09-07T08:22:48.4019654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:22:48.4020036Z layer_outputs = layer_module(
2025-09-07T08:22:48.4020423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:22:48.4020821Z return super().__call__(*args, **kwargs)
2025-09-07T08:22:48.4021194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-09-07T08:22:48.4021579Z self_attention_outputs = self.layer[0](
2025-09-07T08:22:48.4022012Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:22:48.4022404Z attention_output = self.SelfAttention(
2025-09-07T08:22:48.4022791Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward
2025-09-07T08:22:48.4023201Z attn_output = torch.matmul(attn_weights, value_states)
2025-09-07T08:22:48.4023379Z
2025-09-07T08:22:48.4023489Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:22:48.4023858Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:22:48.4024191Z return mod(**inputs)
2025-09-07T08:22:48.4024544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:22:48.4024919Z decoder_outputs = self.decoder(
2025-09-07T08:22:48.4025294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:22:48.4025680Z layer_outputs = layer_module(
2025-09-07T08:22:48.4026054Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:22:48.4026439Z return super().__call__(*args, **kwargs)
2025-09-07T08:22:48.4026865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-09-07T08:22:48.4027291Z self_attention_outputs = self.layer[0](
2025-09-07T08:22:48.4027697Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:22:48.4028083Z attention_output = self.SelfAttention(
2025-09-07T08:22:48.4028457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward
2025-09-07T08:22:48.4028896Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:22:48.4029072Z
2025-09-07T08:22:48.4029158Z cudagraph partition due to non gpu ops
2025-09-07T08:22:48.4192045Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:22:48.4192248Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:22:48.4192323Z return mod(**inputs)
2025-09-07T08:22:48.4192559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1798, in forward
2025-09-07T08:22:48.4192744Z loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
2025-09-07T08:22:48.4192748Z
2025-09-07T08:23:02.4440555Z Compilation time (from dynamo_timed): 51.61443594
2025-09-07T08:23:02.4464312Z pass
2025-09-07T08:23:02.4464795Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:23:02.4470714Z TIMING: _recursive_pre_grad_passes:0.05626 _recursive_joint_graph_passes:0.59403 _recursive_post_grad_passes:0.21006 linear_unary_template_precompiling:5.37418 linear_unary_template_autotuning:1.8675 bmm_template_precompiling:1.39594 bmm_template_autotuning:0.4456 async_compile.wait:0.76727 code_gen:14.15636 inductor_compile:41.68479 backend_compile:48.71181 gc:0.00043 entire_frame_compile:51.61444 total_wall_time:51.61444
2025-09-07T08:23:02.4472119Z STATS: call_* op count: 824 | FakeTensorMode.__torch_dispatch__:38720 | FakeTensor.__torch_dispatch__:4678 | ProxyTorchDispatchMode.__torch_dispatch__:10688
2025-09-07T08:23:02.4472636Z Dynamo produced 1 graphs covering 824 ops with 0 graph breaks (0 unique)
2025-09-07T08:23:05.4758661Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:23:05.4759553Z import pynvml # type: ignore[import]
2025-09-07T08:23:08.0890986Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:23:08.0892040Z from pkg_resources import resource_filename
2025-09-07T08:23:08.7523709Z
2025-09-07T08:23:09.8427098Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:23:09.8427441Z loading model: 0it [00:01, ?it/s]
2025-09-07T08:23:09.8427684Z cpu eval T5Small
2025-09-07T08:23:11.2682216Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:23:11.6734675Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:23:12.2180391Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:23:41.8674060Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:23:41.8674726Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:23:41.8675152Z return mod(**inputs)
2025-09-07T08:23:41.8681268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:23:41.8681917Z decoder_outputs = self.decoder(
2025-09-07T08:23:41.8682945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:23:41.8683490Z layer_outputs = layer_module(
2025-09-07T08:23:41.8683967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:23:41.8684392Z return super().__call__(*args, **kwargs)
2025-09-07T08:23:41.8684812Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-09-07T08:23:41.8685255Z self_attention_outputs = self.layer[0](
2025-09-07T08:23:41.8685674Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:23:41.8686100Z attention_output = self.SelfAttention(
2025-09-07T08:23:41.8686521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 546, in forward
2025-09-07T08:23:41.8687650Z position_bias = position_bias + causal_mask
2025-09-07T08:23:41.8687843Z
2025-09-07T08:23:41.8687955Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8688195Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8688460Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:23:41.8688855Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:23:41.8689202Z return mod(**inputs)
2025-09-07T08:23:41.8689684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:23:41.8690169Z decoder_outputs = self.decoder(
2025-09-07T08:23:41.8690589Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:23:41.8691012Z layer_outputs = layer_module(
2025-09-07T08:23:41.8691414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:23:41.8691843Z return super().__call__(*args, **kwargs)
2025-09-07T08:23:41.8692259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-09-07T08:23:41.8692749Z self_attention_outputs = self.layer[0](
2025-09-07T08:23:41.8693184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:23:41.8693616Z attention_output = self.SelfAttention(
2025-09-07T08:23:41.8694049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward
2025-09-07T08:23:41.8694530Z scores = torch.matmul(query_states, key_states.transpose(3, 2))
2025-09-07T08:23:41.8694740Z
2025-09-07T08:23:41.8694833Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8695071Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8695317Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8695580Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:23:41.8695994Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:23:41.8696350Z return mod(**inputs)
2025-09-07T08:23:41.8696741Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:23:41.8697162Z decoder_outputs = self.decoder(
2025-09-07T08:23:41.8697568Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:23:41.8697983Z layer_outputs = layer_module(
2025-09-07T08:23:41.8698365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:23:41.8698771Z return super().__call__(*args, **kwargs)
2025-09-07T08:23:41.8699151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-09-07T08:23:41.8699553Z self_attention_outputs = self.layer[0](
2025-09-07T08:23:41.8699938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:23:41.8700334Z attention_output = self.SelfAttention(
2025-09-07T08:23:41.8700730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward
2025-09-07T08:23:41.8701174Z attn_output = torch.matmul(attn_weights, value_states)
2025-09-07T08:23:41.8701363Z
2025-09-07T08:23:41.8701480Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:23:41.8701877Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:23:41.8702237Z return mod(**inputs)
2025-09-07T08:23:41.8702626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:23:41.8703059Z decoder_outputs = self.decoder(
2025-09-07T08:23:41.8703434Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:23:41.8703823Z layer_outputs = layer_module(
2025-09-07T08:23:41.8704178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:23:41.8704553Z return super().__call__(*args, **kwargs)
2025-09-07T08:23:41.8704993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-09-07T08:23:41.8705400Z self_attention_outputs = self.layer[0](
2025-09-07T08:23:41.8705956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:23:41.8706371Z attention_output = self.SelfAttention(
2025-09-07T08:23:41.8706772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward
2025-09-07T08:23:41.8707270Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:23:41.8707445Z
2025-09-07T08:23:41.8707529Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8707768Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8707986Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8708217Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:23:41.8708586Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:23:41.8708919Z return mod(**inputs)
2025-09-07T08:23:41.8709276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
2025-09-07T08:23:41.8709652Z encoder_outputs = self.encoder(
2025-09-07T08:23:41.8710022Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:23:41.8710403Z layer_outputs = layer_module(
2025-09-07T08:23:41.8710759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:23:41.8711128Z return super().__call__(*args, **kwargs)
2025-09-07T08:23:41.8711517Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-09-07T08:23:41.8711884Z self_attention_outputs = self.layer[0](
2025-09-07T08:23:41.8712251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:23:41.8712642Z attention_output = self.SelfAttention(
2025-09-07T08:23:41.8713071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward
2025-09-07T08:23:41.8713507Z scores = torch.matmul(query_states, key_states.transpose(3, 2))
2025-09-07T08:23:41.8713700Z
2025-09-07T08:23:41.8713809Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:23:41.8714186Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:23:41.8714526Z return mod(**inputs)
2025-09-07T08:23:41.8714875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward
2025-09-07T08:23:41.8715262Z encoder_outputs = self.encoder(
2025-09-07T08:23:41.8715655Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:23:41.8716051Z layer_outputs = layer_module(
2025-09-07T08:23:41.8716432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:23:41.8716796Z return super().__call__(*args, **kwargs)
2025-09-07T08:23:41.8717173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-09-07T08:23:41.8717593Z self_attention_outputs = self.layer[0](
2025-09-07T08:23:41.8717972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:23:41.8718412Z attention_output = self.SelfAttention(
2025-09-07T08:23:41.8718784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 558, in forward
2025-09-07T08:23:41.8719232Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores)
2025-09-07T08:23:41.8719443Z
2025-09-07T08:23:41.8719587Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8719823Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8720044Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8720300Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:23:41.8720687Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8721044Z return mod(**inputs) 2025-09-07T08:23:41.8721415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8721819Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8722234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8722631Z layer_outputs = layer_module( 2025-09-07T08:23:41.8723005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8723388Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8723785Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8724188Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8724587Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8724987Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8725390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:23:41.8725894Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:23:41.8726078Z 2025-09-07T08:23:41.8726200Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8726599Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8726955Z return mod(**inputs) 2025-09-07T08:23:41.8727435Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8727854Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8728254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8728657Z layer_outputs = layer_module( 2025-09-07T08:23:41.8729061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8729461Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8729878Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8730290Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8730707Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8731128Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8731543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:23:41.8731989Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:23:41.8732171Z 2025-09-07T08:23:41.8732261Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8732543Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8732777Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8733008Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8733231Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8733464Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8733725Z cudagraph partition due to non gpu ops. 
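When reproducing these messages locally, it can be easier to route Dynamo/Inductor diagnostics through PyTorch's structured logging instead of scraping stdout. A sketch follows, assuming a recent PyTorch; the component names and levels chosen here are assumptions (available components can be listed via the TORCH_LOGS environment variable's help output).

import logging
import torch
import torch._logging as torch_logging

# Turn up Dynamo/Inductor logging; analogous to setting the TORCH_LOGS env var.
torch_logging.set_logs(dynamo=logging.INFO, inductor=logging.DEBUG)

@torch.compile(backend="inductor")
def toy(x):
    return torch.relu(x @ x.transpose(-1, -2))

toy(torch.randn(8, 8))  # first call compiles and emits diagnostics on stderr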
Found from : 2025-09-07T08:23:41.8734125Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8734498Z return mod(**inputs) 2025-09-07T08:23:41.8734884Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8735297Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8735702Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8736117Z layer_outputs = layer_module( 2025-09-07T08:23:41.8736497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8736881Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8737260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8737628Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8737995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8738359Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8738729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:23:41.8739142Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:23:41.8739320Z 2025-09-07T08:23:41.8739433Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8739786Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8740114Z return mod(**inputs) 2025-09-07T08:23:41.8740464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8740842Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8741211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8741582Z layer_outputs = layer_module( 2025-09-07T08:23:41.8741937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8742313Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8742681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8743052Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8743417Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8743787Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8744161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 558, in forward 2025-09-07T08:23:41.8744605Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) 2025-09-07T08:23:41.8744815Z 2025-09-07T08:23:41.8744898Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8745371Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8745592Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8745843Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:23:41.8746205Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8746525Z return mod(**inputs) 2025-09-07T08:23:41.8746930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8747330Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8747693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8748054Z layer_outputs = layer_module( 2025-09-07T08:23:41.8748406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8748800Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8749178Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8749558Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8749924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8750304Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8750677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:23:41.8751080Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:23:41.8751247Z 2025-09-07T08:23:41.8751375Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8751723Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8752038Z return mod(**inputs) 2025-09-07T08:23:41.8752375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8752740Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8753086Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8753448Z layer_outputs = layer_module( 2025-09-07T08:23:41.8753784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8754136Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8754496Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8754852Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8755212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8755576Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8755939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:23:41.8756323Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:23:41.8756490Z 2025-09-07T08:23:41.8756568Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8756777Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8756986Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8757192Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8757390Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8757619Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:23:41.8757971Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8758290Z return mod(**inputs) 2025-09-07T08:23:41.8758627Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8758992Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8759442Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8759815Z layer_outputs = layer_module( 2025-09-07T08:23:41.8760157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8760585Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8761010Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8761400Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8761784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8762185Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8762652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:23:41.8763110Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:23:41.8763304Z 2025-09-07T08:23:41.8763428Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8763820Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8764188Z return mod(**inputs) 2025-09-07T08:23:41.8764565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8764978Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8765390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8765797Z layer_outputs = layer_module( 2025-09-07T08:23:41.8766175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8766566Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8766973Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8767469Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8767890Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8768323Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8768712Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 558, in forward 2025-09-07T08:23:41.8769167Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) 2025-09-07T08:23:41.8769372Z 2025-09-07T08:23:41.8769459Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8769692Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8769931Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8770195Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:23:41.8770596Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8770965Z return mod(**inputs) 2025-09-07T08:23:41.8771367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8771790Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8772198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8772617Z layer_outputs = layer_module( 2025-09-07T08:23:41.8772999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8773403Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8773818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8774246Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8774652Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8775081Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8775498Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:23:41.8776000Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:23:41.8776181Z 2025-09-07T08:23:41.8776301Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8776699Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8777080Z return mod(**inputs) 2025-09-07T08:23:41.8777482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8777898Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8778290Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8778666Z layer_outputs = layer_module( 2025-09-07T08:23:41.8779301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8779710Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8780116Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8780528Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8780962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8781350Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8781738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:23:41.8782148Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:23:41.8782313Z 2025-09-07T08:23:41.8782397Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8782622Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8782849Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8783077Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8783295Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8783549Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:23:41.8783939Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8784294Z return mod(**inputs) 2025-09-07T08:23:41.8784667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8785062Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8785438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8785812Z layer_outputs = layer_module( 2025-09-07T08:23:41.8786166Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8786529Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8786913Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8787297Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8787676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8788050Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8788432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:23:41.8788859Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:23:41.8789041Z 2025-09-07T08:23:41.8789154Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8789521Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8789847Z return mod(**inputs) 2025-09-07T08:23:41.8790211Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8790617Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8790986Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8791363Z layer_outputs = layer_module( 2025-09-07T08:23:41.8791714Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8792076Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8792460Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8792829Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8793191Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8793566Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8793937Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 558, in forward 2025-09-07T08:23:41.8794381Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) 2025-09-07T08:23:41.8794602Z 2025-09-07T08:23:41.8794706Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8794914Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8795124Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8795365Z cudagraph partition due to non gpu ops. 
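Because the same partition message repeats for every encoder and decoder layer, a saved copy of this log is easier to digest after tallying which model source lines the traces point at. A small, self-contained parsing sketch; the log file name and the exact message text matched here are assumptions based on the output above.

import re
from collections import Counter

# Assumed file name for a saved copy of this job's raw log.
LOG_PATH = "nightly_dynamo_benchmarks_t5small.log"

frame = re.compile(r'File "([^"]*modeling_t5\.py)", line (\d+), in (\w+)')
partitions = 0
hits = Counter()
with open(LOG_PATH, encoding="utf-8") as fh:
    for line in fh:
        partitions += line.count("cudagraph partition due to non gpu ops")
        for path, lineno, func in frame.findall(line):
            hits[(path.rsplit("/", 1)[-1], int(lineno), func)] += 1

print(f"{partitions} partition messages")
for (fname, lineno, func), count in hits.most_common():
    print(f"{count:4d}  {fname}:{lineno} ({func})")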
Found from : 2025-09-07T08:23:41.8795725Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8796043Z return mod(**inputs) 2025-09-07T08:23:41.8796393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8796772Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8797151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8797537Z layer_outputs = layer_module( 2025-09-07T08:23:41.8797887Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8798265Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8798648Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8799036Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8799441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8799822Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8800202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:23:41.8800615Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:23:41.8800776Z 2025-09-07T08:23:41.8800886Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8801233Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8801556Z return mod(**inputs) 2025-09-07T08:23:41.8801902Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8802311Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8802686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8803052Z layer_outputs = layer_module( 2025-09-07T08:23:41.8803405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8803776Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8804155Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8804580Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8804962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8805348Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8805745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:23:41.8806203Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:23:41.8806381Z 2025-09-07T08:23:41.8806469Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8806701Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8806934Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8807244Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8807494Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:23:41.8807887Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8808244Z return mod(**inputs) 2025-09-07T08:23:41.8808644Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8809057Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8809448Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8809857Z layer_outputs = layer_module( 2025-09-07T08:23:41.8810234Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8810636Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8811031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8811444Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8811865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8812275Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8812677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:23:41.8813128Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:23:41.8813329Z 2025-09-07T08:23:41.8813444Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8813830Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8814180Z return mod(**inputs) 2025-09-07T08:23:41.8814555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8814958Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8815351Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8815740Z layer_outputs = layer_module( 2025-09-07T08:23:41.8816085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8816444Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8816818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8817200Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8817579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8817961Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8818333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 558, in forward 2025-09-07T08:23:41.8818807Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) 2025-09-07T08:23:41.8819042Z 2025-09-07T08:23:41.8819126Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8819350Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8819558Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8819799Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:23:41.8820167Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8820491Z return mod(**inputs) 2025-09-07T08:23:41.8820856Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8821219Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8821581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8821951Z layer_outputs = layer_module( 2025-09-07T08:23:41.8822299Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8822665Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8823070Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8823456Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8823840Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8824234Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8824599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:23:41.8825003Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:23:41.8825172Z 2025-09-07T08:23:41.8825280Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8825653Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8825983Z return mod(**inputs) 2025-09-07T08:23:41.8826324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8826699Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8827061Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8827430Z layer_outputs = layer_module( 2025-09-07T08:23:41.8827768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8828130Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8828504Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8828881Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8829254Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8829621Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8829993Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:23:41.8830394Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:23:41.8830554Z 2025-09-07T08:23:41.8830642Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8830856Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8831062Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8831270Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8831478Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8831711Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:23:41.8832059Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8832439Z return mod(**inputs) 2025-09-07T08:23:41.8832788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8833162Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8833521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8833889Z layer_outputs = layer_module( 2025-09-07T08:23:41.8834251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8834618Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8834988Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8835356Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8835731Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8836122Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8836544Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:23:41.8836983Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:23:41.8837173Z 2025-09-07T08:23:41.8837290Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8837653Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8837990Z return mod(**inputs) 2025-09-07T08:23:41.8838326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8838679Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8839033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8839396Z layer_outputs = layer_module( 2025-09-07T08:23:41.8839730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8840085Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8840439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8840801Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8841161Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8841527Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8841883Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 558, in forward 2025-09-07T08:23:41.8842315Z attn_weights = nn.functional.softmax(scores.float(), dim=-1).type_as(scores) 2025-09-07T08:23:41.8842520Z 2025-09-07T08:23:41.8842605Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8842823Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8843034Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8843260Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:23:41.8843616Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8843942Z return mod(**inputs) 2025-09-07T08:23:41.8844288Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8844651Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8845207Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8845656Z layer_outputs = layer_module( 2025-09-07T08:23:41.8846033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8846525Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8846936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8847491Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8847912Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8848328Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8848761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:23:41.8849159Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:23:41.8849329Z 2025-09-07T08:23:41.8849433Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8849788Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8850111Z return mod(**inputs) 2025-09-07T08:23:41.8850446Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1725, in forward 2025-09-07T08:23:41.8850818Z encoder_outputs = self.encoder( 2025-09-07T08:23:41.8851206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8851576Z layer_outputs = layer_module( 2025-09-07T08:23:41.8851921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8852279Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8852653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8853068Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8853438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8853811Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8854188Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:23:41.8854591Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:23:41.8854751Z 2025-09-07T08:23:41.8854841Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8855053Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8855258Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8855495Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:23:41.8855854Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8856181Z return mod(**inputs) 2025-09-07T08:23:41.8856530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-09-07T08:23:41.8856911Z decoder_outputs = self.decoder( 2025-09-07T08:23:41.8857294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8857672Z layer_outputs = layer_module( 2025-09-07T08:23:41.8858031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8858396Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8858781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-09-07T08:23:41.8859157Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:23:41.8859527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-09-07T08:23:41.8859908Z attention_output = self.EncDecAttention( 2025-09-07T08:23:41.8860284Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:23:41.8860758Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:23:41.8860940Z 2025-09-07T08:23:41.8861029Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8861245Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8861456Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8861697Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8862069Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8862429Z return mod(**inputs) 2025-09-07T08:23:41.8862772Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-09-07T08:23:41.8863144Z decoder_outputs = self.decoder( 2025-09-07T08:23:41.8863520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8863899Z layer_outputs = layer_module( 2025-09-07T08:23:41.8864263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8864630Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8865027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-09-07T08:23:41.8865408Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:23:41.8865781Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-09-07T08:23:41.8866161Z attention_output = self.EncDecAttention( 2025-09-07T08:23:41.8866534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:23:41.8866940Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:23:41.8867108Z 2025-09-07T08:23:41.8867213Z cudagraph partition due to non gpu ops. 
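The decoder traces above add the cross-attention path (EncDecAttention reached through self.layer[1]) to the self-attention locations seen earlier, but the report is the same in every case: this job runs on CPU ("cpu eval T5Small"), so from the cudagraph partitioner's point of view each of these ops is a "non gpu op". As a reminder that CUDA graphs are something torch.compile requests in reduce-overhead mode and only applies to GPU work, here is a toy contrast of the two modes; the function is illustrative and the interpretation of the messages is an assumption consistent with, but not stated by, the log.

import torch

# Illustrative function only; not the benchmark model.
def attn_like(x):
    return (x @ x.transpose(-1, -2)).softmax(dim=-1)

default_c = torch.compile(attn_like, backend="inductor")        # default mode
overhead_c = torch.compile(attn_like, mode="reduce-overhead")   # asks for CUDA graphs on GPU

x = torch.randn(4, 16, 16)  # CPU tensor, as in this job; nothing for CUDA graphs to capture
print(default_c(x).shape, overhead_c(x).shape)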
Found from : 2025-09-07T08:23:41.8867579Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8867904Z return mod(**inputs) 2025-09-07T08:23:41.8868256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-09-07T08:23:41.8868636Z decoder_outputs = self.decoder( 2025-09-07T08:23:41.8868998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8869361Z layer_outputs = layer_module( 2025-09-07T08:23:41.8869698Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8870054Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8870419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward 2025-09-07T08:23:41.8870791Z cross_attention_outputs = self.layer[1]( 2025-09-07T08:23:41.8871157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward 2025-09-07T08:23:41.8871530Z attention_output = self.EncDecAttention( 2025-09-07T08:23:41.8871900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward 2025-09-07T08:23:41.8872294Z attn_output = attn_output.transpose(1, 2).contiguous() 2025-09-07T08:23:41.8872454Z 2025-09-07T08:23:41.8872543Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8872751Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8872962Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8873178Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8873390Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8873622Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:23:41.8873989Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8874362Z return mod(**inputs) 2025-09-07T08:23:41.8874722Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-09-07T08:23:41.8875113Z decoder_outputs = self.decoder( 2025-09-07T08:23:41.8875472Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8875842Z layer_outputs = layer_module( 2025-09-07T08:23:41.8876215Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8876584Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8876953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8877336Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8877715Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8878113Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8878486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward 2025-09-07T08:23:41.8878932Z scores = torch.matmul(query_states, key_states.transpose(3, 2)) 2025-09-07T08:23:41.8879123Z 2025-09-07T08:23:41.8879206Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8879431Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8879653Z cudagraph partition due to non gpu ops 2025-09-07T08:23:41.8879895Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:23:41.8880266Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:23:41.8880604Z return mod(**inputs) 2025-09-07T08:23:41.8880966Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward 2025-09-07T08:23:41.8881356Z decoder_outputs = self.decoder( 2025-09-07T08:23:41.8881737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward 2025-09-07T08:23:41.8882141Z layer_outputs = layer_module( 2025-09-07T08:23:41.8882520Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:23:41.8882919Z return super().__call__(*args, **kwargs) 2025-09-07T08:23:41.8883320Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward 2025-09-07T08:23:41.8883732Z self_attention_outputs = self.layer[0]( 2025-09-07T08:23:41.8884141Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward 2025-09-07T08:23:41.8884562Z attention_output = self.SelfAttention( 2025-09-07T08:23:41.8884971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward 2025-09-07T08:23:41.8885408Z attn_output = torch.matmul(attn_weights, value_states) 2025-09-07T08:23:41.8885593Z 2025-09-07T08:23:41.8885709Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-07T08:23:41.8886107Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:23:41.8886464Z return mod(**inputs)
2025-09-07T08:23:41.8886845Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:23:41.8887390Z decoder_outputs = self.decoder(
2025-09-07T08:23:41.8887805Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:23:41.8888230Z layer_outputs = layer_module(
2025-09-07T08:23:41.8888608Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:23:41.8889051Z return super().__call__(*args, **kwargs)
2025-09-07T08:23:41.8889432Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 681, in forward
2025-09-07T08:23:41.8889816Z self_attention_outputs = self.layer[0](
2025-09-07T08:23:41.8890198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:23:41.8890588Z attention_output = self.SelfAttention(
2025-09-07T08:23:41.8890985Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward
2025-09-07T08:23:41.8891397Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:23:41.8891655Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8891879Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8892089Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.8892335Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:23:41.8892703Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:23:41.8893038Z return mod(**inputs)
2025-09-07T08:23:41.8893414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1762, in forward
2025-09-07T08:23:41.8893795Z decoder_outputs = self.decoder(
2025-09-07T08:23:41.8894187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1092, in forward
2025-09-07T08:23:41.8894558Z layer_outputs = layer_module(
2025-09-07T08:23:41.8894895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:23:41.8895246Z return super().__call__(*args, **kwargs)
2025-09-07T08:23:41.8895611Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 705, in forward
2025-09-07T08:23:41.8895982Z cross_attention_outputs = self.layer[1](
2025-09-07T08:23:41.8896358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward
2025-09-07T08:23:41.8896740Z attention_output = self.EncDecAttention(
2025-09-07T08:23:41.8897115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward
2025-09-07T08:23:41.8897531Z scores = torch.matmul(query_states, key_states.transpose(3, 2))
2025-09-07T08:23:41.8903734Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:23:41.8907649Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward
2025-09-07T08:23:41.8908029Z attention_output = self.EncDecAttention(
2025-09-07T08:23:41.8908393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 567, in forward
2025-09-07T08:23:41.8908780Z attn_output = attn_output.transpose(1, 2).contiguous()
2025-09-07T08:23:41.8910073Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:23:41.8913924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:23:41.8914293Z attention_output = self.SelfAttention(
2025-09-07T08:23:41.8914658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 526, in forward
2025-09-07T08:23:41.8915060Z scores = torch.matmul(query_states, key_states.transpose(3, 2))
2025-09-07T08:23:41.8916006Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:23:41.8919919Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 599, in forward
2025-09-07T08:23:41.8920301Z attention_output = self.SelfAttention(
2025-09-07T08:23:41.8920709Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward
2025-09-07T08:23:41.8921100Z attn_output = torch.matmul(attn_weights, value_states)
2025-09-07T08:23:41.8933841Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:23:41.8937895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 635, in forward
2025-09-07T08:23:41.8938275Z attention_output = self.EncDecAttention(
2025-09-07T08:23:41.8938642Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 565, in forward
2025-09-07T08:23:41.8939040Z attn_output = torch.matmul(attn_weights, value_states)
2025-09-07T08:23:41.9048404Z cudagraph partition due to non gpu ops
2025-09-07T08:23:41.9048525Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:23:41.9048742Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:23:41.9048816Z return mod(**inputs)
2025-09-07T08:23:41.9049146Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/t5/modeling_t5.py", line 1798, in forward
2025-09-07T08:23:41.9049296Z loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
2025-09-07T08:23:54.1345892Z Compilation time (from dynamo_timed): 40.442446879
2025-09-07T08:23:54.1466173Z pass
2025-09-07T08:23:54.1466766Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:23:54.1468230Z TIMING: _recursive_pre_grad_passes:0.05941 _recursive_joint_graph_passes:0.58385 _recursive_post_grad_passes:0.20514 linear_unary_template_precompiling:0.0177 bmm_template_precompiling:0.00327 async_compile.wait:0.00697 code_gen:12.41501 inductor_compile:30.62548 backend_compile:37.56836 gc:0.00026 entire_frame_compile:40.44245 total_wall_time:40.44245
2025-09-07T08:23:54.1469434Z STATS: call_* op count: 824 | FakeTensorMode.__torch_dispatch__:38720 | FakeTensor.__torch_dispatch__:4678 | ProxyTorchDispatchMode.__torch_dispatch__:10688
2025-09-07T08:23:54.1470002Z Dynamo produced 1 graphs covering 824 ops with 0 graph breaks (0 unique)
2025-09-07T08:23:57.0536871Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:23:57.0537978Z import pynvml # type: ignore[import]
2025-09-07T08:23:59.7777967Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:23:59.7780796Z from pkg_resources import resource_filename
2025-09-07T08:24:02.8553445Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:24:02.8557081Z loading model: 0it [00:02, ?it/s]
2025-09-07T08:24:02.8557432Z cpu eval TrOCRForCausalLM
2025-09-07T08:24:03.0103908Z WARNING:common:fp64 golden ref were not generated for TrOCRForCausalLM. Setting accuracy check to cosine
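The "fp64 golden ref were not generated ... Setting accuracy check to cosine" warning above means the harness could not build an fp64 reference run for this model, so it compares eager and compiled outputs by cosine similarity instead of elementwise tolerances. A minimal sketch of that kind of check, purely for illustration (this is not the benchmark's own implementation, and the 0.99 threshold is an assumed value):

import torch
import torch.nn.functional as F

def cosine_accuracy(ref: torch.Tensor, res: torch.Tensor, threshold: float = 0.99) -> bool:
    # Compare flattened outputs by cosine similarity rather than rtol/atol.
    ref = ref.flatten().to(torch.float32)
    res = res.flatten().to(torch.float32)
    sim = F.cosine_similarity(ref.unsqueeze(0), res.unsqueeze(0)).item()
    return sim >= threshold

# Toy usage: eager vs. torch.compile outputs for a small module.
model = torch.nn.Linear(8, 8).eval()
x = torch.randn(2, 8)
with torch.no_grad():
    eager_out = model(x)
    compiled_out = torch.compile(model)(x)
print("pass" if cosine_accuracy(eager_out, compiled_out) else "fail")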
2025-09-07T08:24:03.0532329Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:24:03.3067061Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:24:03.5323857Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:24:22.4889588Z Autotune Choices Stats:
2025-09-07T08:24:22.4890218Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.06613499999730266}
2025-09-07T08:24:22.4903282Z AUTOTUNE linear_unary(256x1024, 1024x1024, 1024)
2025-09-07T08:24:22.4903646Z strides: [1024, 1], [1, 0], [1]
2025-09-07T08:24:22.4904006Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:24:22.4904385Z cpp_CppMicroGemmAMX_0 0.0661 ms 100.0%
2025-09-07T08:24:22.4904694Z _linear_pointwise 0.0940 ms 70.3%
2025-09-07T08:24:22.4905242Z SingleProcess AUTOTUNE benchmarking takes 0.2660 seconds and 1.4315 seconds precompiling for 2 choices
2025-09-07T08:24:24.6793174Z Autotune Choices Stats:
2025-09-07T08:24:24.6793656Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_4", "best_time": 0.19295099991722964}
2025-09-07T08:24:24.6805493Z AUTOTUNE linear_unary(256x1024, 4096x1024, 4096)
2025-09-07T08:24:24.6805782Z strides: [1024, 1], [1, 0], [1]
2025-09-07T08:24:24.6806051Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:24:24.6806350Z cpp_CppMicroGemmAMX_4 0.1930 ms 100.0%
2025-09-07T08:24:24.6806602Z _linear_pointwise 0.1996 ms 96.7%
2025-09-07T08:24:24.6807001Z SingleProcess AUTOTUNE benchmarking takes 0.2860 seconds and 1.5275 seconds precompiling for 2 choices
2025-09-07T08:24:26.4196431Z Autotune Choices Stats:
2025-09-07T08:24:26.4197008Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_5", "best_time": 0.117492000299535}
2025-09-07T08:24:26.4211638Z AUTOTUNE linear_unary(256x4096, 1024x4096, 1024)
2025-09-07T08:24:26.4212206Z strides: [4096, 1], [1, 0], [1]
2025-09-07T08:24:26.4212729Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:24:26.4213182Z cpp_CppMicroGemmAMX_5 0.1175 ms 100.0%
2025-09-07T08:24:26.4213745Z _linear_pointwise 0.2027 ms 58.0%
2025-09-07T08:24:26.4214321Z SingleProcess AUTOTUNE benchmarking takes 0.2847 seconds and 1.3716 seconds precompiling for 2 choices
2025-09-07T08:24:33.7444638Z Autotune Choices Stats:
2025-09-07T08:24:33.7445422Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_72", "best_time": 1.887466999960452}
2025-09-07T08:24:33.7463631Z AUTOTUNE linear_unary(256x1024, 50265x1024)
2025-09-07T08:24:33.7464014Z strides: [1024, 1], [1, 0]
2025-09-07T08:24:33.7464302Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:24:33.7464627Z cpp_CppMicroGemmAMX_72 1.8875 ms 100.0%
2025-09-07T08:24:33.7464941Z _linear_pointwise 2.2256 ms 84.8%
2025-09-07T08:24:33.7465739Z SingleProcess AUTOTUNE benchmarking takes 0.5989 seconds and 1.3916 seconds precompiling for 2 choices
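The AUTOTUNE tables above are inductor's CPU max-autotune at work: for each GEMM shape it benchmarks the generated C++ AMX micro-GEMM template (cpp_CppMicroGemmAMX_*) against the ATen fallback (_linear_pointwise or bmm) and keeps the faster choice, which is what the "Autotune Choices Stats" JSON records. A rough stand-alone sketch of the same compile-time options, with a made-up model and shapes (an illustration of the torch.compile settings, not the benchmark harness itself):

import torch

# Freezing folds parameters into constants so the GEMM templates can
# specialize on the weight layout; this mirrors the *_freezing job config.
torch._inductor.config.freezing = True

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1024),
).eval()

compiled = torch.compile(model, mode="max-autotune")

x = torch.randn(256, 1024)
# bfloat16 autocast on CPU stands in for the "amp" part of the config name.
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    out = compiled(x)
print(out.shape, out.dtype)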
2025-09-07T08:24:34.3150156Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:24:34.3150562Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:24:34.3150997Z return mod(**inputs)
2025-09-07T08:24:34.3151416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/trocr/modeling_trocr.py", line 844, in forward
2025-09-07T08:24:34.3151908Z loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1))
2025-09-07T08:24:42.6255381Z Compilation time (from dynamo_timed): 37.806718783
2025-09-07T08:24:42.6258246Z pass
2025-09-07T08:24:42.6258924Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:24:42.6259893Z TIMING: _recursive_pre_grad_passes:0.04148 _recursive_joint_graph_passes:0.49803 _recursive_post_grad_passes:0.071 linear_unary_template_precompiling:5.73389 linear_unary_template_autotuning:1.42537 async_compile.wait:0.80596 code_gen:7.9545 inductor_compile:30.05645 backend_compile:35.45313 gc:0.00128 entire_frame_compile:37.80672 total_wall_time:37.80672
2025-09-07T08:24:42.6263415Z STATS: call_* op count: 445 | FakeTensorMode.__torch_dispatch__:28871 | FakeTensor.__torch_dispatch__:2955 | ProxyTorchDispatchMode.__torch_dispatch__:8203
2025-09-07T08:24:42.6264037Z Dynamo produced 1 graphs covering 445 ops with 0 graph breaks (0 unique)
2025-09-07T08:24:45.5052892Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:24:45.5053931Z import pynvml # type: ignore[import]
2025-09-07T08:24:48.1505092Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:24:48.1506102Z from pkg_resources import resource_filename
2025-09-07T08:24:55.1239428Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:24:55.1241705Z loading model: 0it [00:06, ?it/s]
2025-09-07T08:24:55.1242024Z cpu eval XGLMForCausalLM
2025-09-07T08:24:55.5017906Z WARNING:common:fp64 golden ref were not generated for XGLMForCausalLM. Setting accuracy check to cosine
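The "Dynamo produced 1 graphs covering 445 ops with 0 graph breaks (0 unique)" summaries above report how much of each model Dynamo captured into a single FX graph. A hedged sketch of one way to get the same kind of counts for an arbitrary callable; torch._dynamo.explain is a private API, so the exact output fields may differ between PyTorch releases:

import torch
import torch._dynamo

def fn(x):
    # A tiny function that should trace into a single graph with no breaks.
    y = torch.sin(x) + torch.cos(x)
    return y.relu()

explanation = torch._dynamo.explain(fn)(torch.randn(8))
print(f"graphs: {explanation.graph_count}, graph breaks: {explanation.graph_break_count}")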
2025-09-07T08:24:55.5855478Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:24:56.1592070Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:24:56.6421104Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:25:24.7827777Z Autotune Choices Stats:
2025-09-07T08:25:24.7828452Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_3", "best_time": 0.007263000043167267}
2025-09-07T08:25:24.7837210Z AUTOTUNE bmm(16x128x128, 16x128x64)
2025-09-07T08:25:24.7837526Z strides: [16384, 128, 1], [64, 1024, 1]
2025-09-07T08:25:24.7838159Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:25:24.7838505Z cpp_CppMicroGemmAMX_3 0.0073 ms 100.0%
2025-09-07T08:25:24.7838813Z bmm 0.6741 ms 1.1%
2025-09-07T08:25:24.7839341Z SingleProcess AUTOTUNE benchmarking takes 0.2669 seconds and 1.4153 seconds precompiling for 2 choices
2025-09-07T08:25:41.9767086Z Autotune Choices Stats:
2025-09-07T08:25:41.9767876Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_168", "best_time": 6.596431000161829}
2025-09-07T08:25:41.9786925Z AUTOTUNE linear_unary(128x1024, 256008x1024)
2025-09-07T08:25:41.9787230Z strides: [1024, 1], [1, 0]
2025-09-07T08:25:41.9787458Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:25:41.9790311Z cpp_CppMicroGemmAMX_168 6.5964 ms 100.0%
2025-09-07T08:25:41.9790569Z _linear_pointwise 11.7220 ms 56.3%
2025-09-07T08:25:41.9790951Z SingleProcess AUTOTUNE benchmarking takes 1.4777 seconds and 1.4311 seconds precompiling for 2 choices
2025-09-07T08:25:44.5598292Z cudagraph partition due to non gpu ops
2025-09-07T08:25:44.5598705Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:25:44.5599141Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:25:44.5599505Z return mod(**inputs)
2025-09-07T08:25:44.5599953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward
2025-09-07T08:25:44.5600755Z outputs = self.model(
2025-09-07T08:25:44.5601181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward
2025-09-07T08:25:44.5601629Z layer_outputs = decoder_layer(
2025-09-07T08:25:44.5602045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
2025-09-07T08:25:44.5602472Z return super().__call__(*args, **kwargs)
2025-09-07T08:25:44.5602928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward
2025-09-07T08:25:44.5603398Z hidden_states, self_attn_weights = self.self_attn(
2025-09-07T08:25:44.5603939Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward
2025-09-07T08:25:44.5604452Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
2025-09-07T08:25:44.5604818Z cudagraph partition due to non gpu ops
2025-09-07T08:25:44.5605057Z cudagraph partition due to non gpu ops
2025-09-07T08:25:44.5605302Z cudagraph partition due to non gpu ops
2025-09-07T08:25:44.5605575Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:25:44.5605992Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5606365Z return mod(**inputs) 2025-09-07T08:25:44.5606784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5607486Z outputs = self.model( 2025-09-07T08:25:44.5607923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5608381Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5608775Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5609159Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5609563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5610001Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5610452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5610911Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5611085Z 2025-09-07T08:25:44.5611204Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5611602Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5611987Z return mod(**inputs) 2025-09-07T08:25:44.5612393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5612833Z outputs = self.model( 2025-09-07T08:25:44.5613267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5613713Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5614120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5614583Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5615122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5615590Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5616052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-09-07T08:25:44.5616579Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-09-07T08:25:44.5616796Z 2025-09-07T08:25:44.5616895Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5617172Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5617418Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5617648Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5617917Z cudagraph partition due to non gpu ops. 
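Note on the "fp64 golden ref were not generated for XGLMForCausalLM. Setting accuracy check to cosine" message above: when the benchmark harness (benchmarks/dynamo/huggingface.py) cannot build an fp64 golden reference, it falls back to comparing the eager and compiled outputs by cosine similarity rather than elementwise tolerances. A minimal sketch of what such a check amounts to; the helper name and the 0.99 threshold are illustrative assumptions, not the harness's actual implementation:

    import torch

    def cosine_accuracy_ok(eager_out: torch.Tensor, compiled_out: torch.Tensor,
                           threshold: float = 0.99) -> bool:
        # Compare the direction of the two flattened outputs rather than their
        # exact values; a similarity above the threshold counts as a pass.
        cos = torch.nn.functional.cosine_similarity(
            eager_out.flatten().float(), compiled_out.flatten().float(), dim=0
        )
        return bool(cos >= threshold)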
Found from : 2025-09-07T08:25:44.5618326Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5618704Z return mod(**inputs) 2025-09-07T08:25:44.5619120Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5619542Z outputs = self.model( 2025-09-07T08:25:44.5619971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5620422Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5620838Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5621259Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5621699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5622167Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5622640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-09-07T08:25:44.5623150Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-09-07T08:25:44.5623367Z 2025-09-07T08:25:44.5623461Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5623706Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5623948Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5624219Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5624626Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5625023Z return mod(**inputs) 2025-09-07T08:25:44.5625443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5625873Z outputs = self.model( 2025-09-07T08:25:44.5626293Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5626735Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5627151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5627565Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5628004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5628445Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5628875Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5629311Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5629482Z 2025-09-07T08:25:44.5629597Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5630002Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5630358Z return mod(**inputs) 2025-09-07T08:25:44.5630794Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5631218Z outputs = self.model( 2025-09-07T08:25:44.5631616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5632042Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5632414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5632826Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5633240Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5633672Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5634100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-09-07T08:25:44.5634555Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-09-07T08:25:44.5634752Z 2025-09-07T08:25:44.5634839Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5635068Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5635310Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5635541Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5635835Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5636252Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5636612Z return mod(**inputs) 2025-09-07T08:25:44.5637002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5637429Z outputs = self.model( 2025-09-07T08:25:44.5637801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5638202Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5638559Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5638948Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5639364Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5639839Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5640279Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-09-07T08:25:44.5640746Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-09-07T08:25:44.5640957Z 2025-09-07T08:25:44.5641043Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5641274Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5641502Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5641899Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5642281Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5642642Z return mod(**inputs) 2025-09-07T08:25:44.5643044Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5643468Z outputs = self.model( 2025-09-07T08:25:44.5643860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5644289Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5644684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5645314Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5645768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5646291Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5646735Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5647298Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5647476Z 2025-09-07T08:25:44.5647604Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5648006Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5648411Z return mod(**inputs) 2025-09-07T08:25:44.5648801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5649211Z outputs = self.model( 2025-09-07T08:25:44.5649594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5649999Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5650381Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5650773Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5651218Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5651658Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5652084Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-09-07T08:25:44.5652552Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-09-07T08:25:44.5652749Z 2025-09-07T08:25:44.5652837Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5653073Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5653293Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5653548Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5653939Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5654290Z return mod(**inputs) 2025-09-07T08:25:44.5654680Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5655078Z outputs = self.model( 2025-09-07T08:25:44.5655467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5655876Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5656256Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5656646Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5657058Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5657496Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5657933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-09-07T08:25:44.5658401Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-09-07T08:25:44.5658605Z 2025-09-07T08:25:44.5658692Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5658921Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5659146Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5659400Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5659779Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5660131Z return mod(**inputs) 2025-09-07T08:25:44.5660513Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5660917Z outputs = self.model( 2025-09-07T08:25:44.5661414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5661820Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5662203Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5662597Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5663005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5663453Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5663895Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5664333Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5664495Z 2025-09-07T08:25:44.5664616Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5665009Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5665358Z return mod(**inputs) 2025-09-07T08:25:44.5665759Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5666169Z outputs = self.model( 2025-09-07T08:25:44.5666546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5666935Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5667285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5667655Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5668045Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5668458Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5668863Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-09-07T08:25:44.5669303Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-09-07T08:25:44.5669489Z 2025-09-07T08:25:44.5669574Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5669794Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5670012Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5670230Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5670484Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5670871Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5671221Z return mod(**inputs) 2025-09-07T08:25:44.5671598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5672011Z outputs = self.model( 2025-09-07T08:25:44.5672375Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5672777Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5673164Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5673527Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5673921Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5674335Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5674742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-09-07T08:25:44.5675186Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-09-07T08:25:44.5675377Z 2025-09-07T08:25:44.5675460Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5675725Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5675949Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5676200Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5676631Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5676986Z return mod(**inputs) 2025-09-07T08:25:44.5677374Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5677797Z outputs = self.model( 2025-09-07T08:25:44.5678175Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5678591Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5678972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5679352Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5679753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5680182Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5680639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5681076Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5681241Z 2025-09-07T08:25:44.5681363Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5681755Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5682117Z return mod(**inputs) 2025-09-07T08:25:44.5682521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5682946Z outputs = self.model( 2025-09-07T08:25:44.5683354Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5683785Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5684183Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5684589Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5685024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5685481Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5685930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-09-07T08:25:44.5686418Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-09-07T08:25:44.5686624Z 2025-09-07T08:25:44.5686715Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5686957Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5687305Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5687574Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5687983Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5688352Z return mod(**inputs) 2025-09-07T08:25:44.5688751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5689168Z outputs = self.model( 2025-09-07T08:25:44.5689557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5689970Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5690350Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5690737Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5691194Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5691635Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5692068Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-09-07T08:25:44.5692538Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-09-07T08:25:44.5692739Z 2025-09-07T08:25:44.5692827Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5693070Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5693297Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5693551Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5693939Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5694284Z return mod(**inputs) 2025-09-07T08:25:44.5694667Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5695072Z outputs = self.model( 2025-09-07T08:25:44.5696379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5696816Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5697198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5697595Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5697976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5698380Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5698782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5699193Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5699361Z 2025-09-07T08:25:44.5699467Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5699831Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5700168Z return mod(**inputs) 2025-09-07T08:25:44.5700553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5700956Z outputs = self.model( 2025-09-07T08:25:44.5701323Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5701712Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5702066Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5702439Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5702834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5703249Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5703662Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-09-07T08:25:44.5704098Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-09-07T08:25:44.5704282Z 2025-09-07T08:25:44.5704363Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5704596Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5704807Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5705008Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5705243Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5705599Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5705931Z return mod(**inputs) 2025-09-07T08:25:44.5706345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5706763Z outputs = self.model( 2025-09-07T08:25:44.5707156Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5707616Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5707978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5708365Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5708752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5709165Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5709578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-09-07T08:25:44.5710028Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-09-07T08:25:44.5710220Z 2025-09-07T08:25:44.5710306Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5710523Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5710770Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5711012Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5711362Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5711741Z return mod(**inputs) 2025-09-07T08:25:44.5712095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5712468Z outputs = self.model( 2025-09-07T08:25:44.5712818Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5713186Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5713536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5713892Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5714276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5714670Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5715079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5715488Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5715641Z 2025-09-07T08:25:44.5715757Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5716119Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5716445Z return mod(**inputs) 2025-09-07T08:25:44.5716808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5717193Z outputs = self.model( 2025-09-07T08:25:44.5717555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5717943Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5718297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5718670Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5719064Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5719478Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5719877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-09-07T08:25:44.5720334Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-09-07T08:25:44.5720544Z 2025-09-07T08:25:44.5720627Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5720848Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5721064Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5721297Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5721666Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5721997Z return mod(**inputs) 2025-09-07T08:25:44.5722379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5722758Z outputs = self.model( 2025-09-07T08:25:44.5723119Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5723507Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5723865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5724234Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5724635Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5725049Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5725456Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-09-07T08:25:44.5725903Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-09-07T08:25:44.5726098Z 2025-09-07T08:25:44.5726191Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5726414Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5726641Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5726895Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5727371Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5727706Z return mod(**inputs) 2025-09-07T08:25:44.5728088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5728496Z outputs = self.model( 2025-09-07T08:25:44.5728889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5729275Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5729637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5730021Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5730439Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5730868Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5731277Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5731692Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5731852Z 2025-09-07T08:25:44.5731968Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5732328Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5732656Z return mod(**inputs) 2025-09-07T08:25:44.5733007Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5733386Z outputs = self.model( 2025-09-07T08:25:44.5733743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5734127Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5734474Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5734872Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5735252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5735658Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5736053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-09-07T08:25:44.5736486Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-09-07T08:25:44.5736671Z 2025-09-07T08:25:44.5736754Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5736970Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5737183Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5737384Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5737624Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5737987Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5738313Z return mod(**inputs) 2025-09-07T08:25:44.5738686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5739056Z outputs = self.model( 2025-09-07T08:25:44.5739415Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5739795Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5740144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5740496Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5740877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5741279Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5741677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-09-07T08:25:44.5742108Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-09-07T08:25:44.5742293Z 2025-09-07T08:25:44.5742376Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5742587Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5742799Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5743032Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5743382Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5743710Z return mod(**inputs) 2025-09-07T08:25:44.5744062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5744433Z outputs = self.model( 2025-09-07T08:25:44.5744789Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5745310Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5745669Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5746039Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5746414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5746807Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5747212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5747620Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5747772Z 2025-09-07T08:25:44.5747884Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5748243Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5748641Z return mod(**inputs) 2025-09-07T08:25:44.5748998Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5749367Z outputs = self.model( 2025-09-07T08:25:44.5749713Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5750088Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5750463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5750829Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5751212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5751610Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5752009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-09-07T08:25:44.5752427Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-09-07T08:25:44.5752605Z 2025-09-07T08:25:44.5752712Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5752928Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5753138Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5753364Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5753720Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5754048Z return mod(**inputs) 2025-09-07T08:25:44.5754410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5754789Z outputs = self.model( 2025-09-07T08:25:44.5755151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5755545Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5755906Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5756258Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5756624Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5757018Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5757410Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-09-07T08:25:44.5757835Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-09-07T08:25:44.5758017Z 2025-09-07T08:25:44.5758104Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5758306Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5758517Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5758746Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5759094Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5759406Z return mod(**inputs) 2025-09-07T08:25:44.5759755Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5760121Z outputs = self.model( 2025-09-07T08:25:44.5760477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5760855Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5761212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5761582Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5761978Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5762426Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5762831Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5763229Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5763385Z 2025-09-07T08:25:44.5763487Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5763870Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5764205Z return mod(**inputs) 2025-09-07T08:25:44.5764563Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5764952Z outputs = self.model( 2025-09-07T08:25:44.5765337Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5765761Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5766132Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5766545Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5766970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5767507Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5767956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-09-07T08:25:44.5768425Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-09-07T08:25:44.5768628Z 2025-09-07T08:25:44.5768716Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5768951Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5769201Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5769410Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5769647Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5770001Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5770327Z return mod(**inputs) 2025-09-07T08:25:44.5770681Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5771049Z outputs = self.model( 2025-09-07T08:25:44.5771452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5771843Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5772202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5772564Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5772958Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5773375Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5773786Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-09-07T08:25:44.5774219Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-09-07T08:25:44.5774404Z 2025-09-07T08:25:44.5774484Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5774697Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5774909Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5775142Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5775491Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5775814Z return mod(**inputs) 2025-09-07T08:25:44.5776167Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5776589Z outputs = self.model( 2025-09-07T08:25:44.5776935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5788350Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5788777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5789157Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5789637Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5790074Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5790508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5790929Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5791093Z 2025-09-07T08:25:44.5791213Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5791572Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5791904Z return mod(**inputs) 2025-09-07T08:25:44.5792300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5792677Z outputs = self.model( 2025-09-07T08:25:44.5793027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5793401Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5793749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5794112Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5794486Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5794881Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5795278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-09-07T08:25:44.5795704Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-09-07T08:25:44.5795881Z 2025-09-07T08:25:44.5795976Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5796199Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5796409Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5796655Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5797022Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5797360Z return mod(**inputs) 2025-09-07T08:25:44.5797716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5798101Z outputs = self.model( 2025-09-07T08:25:44.5798464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5798853Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5799212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5799574Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5799971Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5800401Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5800801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-09-07T08:25:44.5801232Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-09-07T08:25:44.5801497Z 2025-09-07T08:25:44.5801581Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5801810Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5802034Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5802288Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5802672Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5803030Z return mod(**inputs) 2025-09-07T08:25:44.5803438Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5803838Z outputs = self.model( 2025-09-07T08:25:44.5804195Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5804585Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5804948Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5805333Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5805733Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5806191Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5806607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5807021Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5807292Z 2025-09-07T08:25:44.5807413Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5807807Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5808170Z return mod(**inputs) 2025-09-07T08:25:44.5808553Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5808932Z outputs = self.model( 2025-09-07T08:25:44.5809289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5809662Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5810018Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5810387Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5810783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5811199Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5811603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 256, in forward 2025-09-07T08:25:44.5812048Z attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim) 2025-09-07T08:25:44.5812237Z 2025-09-07T08:25:44.5812333Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5812547Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5812749Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5812958Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5813194Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5813550Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5813873Z return mod(**inputs) 2025-09-07T08:25:44.5814225Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5814600Z outputs = self.model( 2025-09-07T08:25:44.5814953Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5815334Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5815686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5816097Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5816497Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5816915Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5817326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 197, in forward 2025-09-07T08:25:44.5817783Z attn_weights = torch.bmm(query_states, key_states.transpose(1, 2)) 2025-09-07T08:25:44.5817989Z 2025-09-07T08:25:44.5818072Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5818291Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5818508Z cudagraph partition due to non gpu ops 2025-09-07T08:25:44.5818744Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:25:44.5819112Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5819447Z return mod(**inputs) 2025-09-07T08:25:44.5819813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 664, in forward 2025-09-07T08:25:44.5820200Z outputs = self.model( 2025-09-07T08:25:44.5820580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 552, in forward 2025-09-07T08:25:44.5820972Z layer_outputs = decoder_layer( 2025-09-07T08:25:44.5821329Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:25:44.5821702Z return super().__call__(*args, **kwargs) 2025-09-07T08:25:44.5822089Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 330, in forward 2025-09-07T08:25:44.5822506Z hidden_states, self_attn_weights = self.self_attn( 2025-09-07T08:25:44.5822923Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 243, in forward 2025-09-07T08:25:44.5823339Z attn_output = torch.bmm(attn_probs, value_states) 2025-09-07T08:25:44.5823494Z
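The "cudagraph partition due to non gpu ops" diagnostics above (repeated many times in the original output) all point at the same small region of XGLM self-attention, modeling_xglm.py lines 197, 243 and 256; this job runs the model on CPU, so these appear to be Inductor's cudagraph-partitioning notes about ops that are not GPU ops. A minimal sketch of that region follows; all sizes are illustrative assumptions and are not read from the benchmark run.

import torch

# Hedged sketch of the attention ops named in the tracebacks above
# (modeling_xglm.py lines 197, 243, 256); shapes are assumed for illustration.
bsz, num_heads, tgt_len, head_dim = 1, 16, 512, 64
embed_dim = num_heads * head_dim
query_states = torch.randn(bsz * num_heads, tgt_len, head_dim, dtype=torch.bfloat16)
key_states = torch.randn(bsz * num_heads, tgt_len, head_dim, dtype=torch.bfloat16)
value_states = torch.randn(bsz * num_heads, tgt_len, head_dim, dtype=torch.bfloat16)

attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))  # line 197
attn_probs = torch.softmax(attn_weights, dim=-1)
attn_output = torch.bmm(attn_probs, value_states)                   # line 243
attn_output = attn_output.reshape(bsz, tgt_len, embed_dim)          # line 256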
2025-09-07T08:25:44.5990290Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:25:44.5990489Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:25:44.5990554Z return mod(**inputs) 2025-09-07T08:25:44.5990798Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xglm/modeling_xglm.py", line 685, in forward 2025-09-07T08:25:44.5990870Z loss = self.loss_function( 2025-09-07T08:25:44.5991104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 67, in ForCausalLMLoss 2025-09-07T08:25:44.5991282Z loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs) 2025-09-07T08:25:44.5991532Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/loss/loss_utils.py", line 36, in fixed_cross_entropy 2025-09-07T08:25:44.5991737Z loss = nn.functional.cross_entropy(source, target, ignore_index=ignore_index, reduction=reduction) 2025-09-07T08:25:44.5991741Z
2025-09-07T08:26:04.3472380Z Compilation time (from dynamo_timed): 66.083566782 2025-09-07T08:26:04.3503786Z pass 2025-09-07T08:26:04.3507327Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:26:04.3513205Z TIMING: _recursive_pre_grad_passes:0.0771 _recursive_joint_graph_passes:1.15694 _recursive_post_grad_passes:0.28562 linear_unary_template_precompiling:1.45721 bmm_template_precompiling:1.41953 bmm_template_autotuning:0.2646 linear_unary_template_autotuning:1.47532 async_compile.wait:0.90048 code_gen:20.14476 inductor_compile:50.32731 backend_compile:61.35935 gc:0.00027 entire_frame_compile:66.08357 total_wall_time:66.08357 2025-09-07T08:26:04.3516625Z STATS: call_* op count: 923 | FakeTensorMode.__torch_dispatch__:62622 | FakeTensor.__torch_dispatch__:7353 | ProxyTorchDispatchMode.__torch_dispatch__:16180 2025-09-07T08:26:04.3517129Z Dynamo
produced 1 graphs covering 923 ops with 0 graph breaks (0 unique) 2025-09-07T08:26:08.0103050Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you. 2025-09-07T08:26:08.0104296Z import pynvml # type: ignore[import] 2025-09-07T08:26:10.7040217Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-09-07T08:26:10.7041225Z from pkg_resources import resource_filename 2025-09-07T08:26:11.3806704Z 2025-09-07T08:26:14.6127773Z loading model: 0it [00:00, ?it/s] 2025-09-07T08:26:14.6128267Z loading model: 0it [00:03, ?it/s] 2025-09-07T08:26:14.6129254Z cpu eval XLNetLMHeadModel 2025-09-07T08:26:17.1994841Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:26:17.8734469Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:26:18.5218942Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:26:52.9357008Z Autotune Choices Stats: 2025-09-07T08:26:52.9357679Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.06074950010770408} 2025-09-07T08:26:52.9370960Z AUTOTUNE bmm(1x512x1024, 1x1024x1024) 2025-09-07T08:26:52.9371345Z strides: [524288, 1024, 1], [1048576, 1024, 1] 2025-09-07T08:26:52.9371615Z dtypes: torch.bfloat16, torch.bfloat16 2025-09-07T08:26:52.9371873Z cpp_CppMicroGemmAMX_0 0.0607 ms 100.0% 2025-09-07T08:26:52.9372106Z bmm 0.1560 ms 38.9% 2025-09-07T08:26:52.9372480Z SingleProcess AUTOTUNE benchmarking takes 0.3010 seconds and 1.5994 seconds precompiling for 2 choices 2025-09-07T08:26:55.1394286Z Autotune Choices Stats: 2025-09-07T08:26:55.1394884Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_2", "best_time": 0.09704550006972568} 2025-09-07T08:26:55.1404957Z AUTOTUNE bmm(16x512x64, 16x64x512) 2025-09-07T08:26:55.1405269Z strides: [32768, 64, 1], [64, 1, 1024] 2025-09-07T08:26:55.1405559Z dtypes: torch.bfloat16, torch.bfloat16 2025-09-07T08:26:55.1405873Z cpp_CppMicroGemmAMX_2 0.0970 ms 100.0% 2025-09-07T08:26:55.1406170Z bmm 0.9588 ms 10.1% 2025-09-07T08:26:55.1406664Z SingleProcess AUTOTUNE benchmarking takes 0.3654 seconds and 1.4435 seconds precompiling for 2 choices 2025-09-07T08:26:57.0859184Z Autotune Choices Stats: 2025-09-07T08:26:57.0859671Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_3", "best_time": 0.10854500010282209} 2025-09-07T08:26:57.0872753Z AUTOTUNE bmm(1x1024x1024, 1x1024x1024) 2025-09-07T08:26:57.0873083Z strides: [1048576, 1024, 1], [1048576, 1024, 1] 2025-09-07T08:26:57.0873353Z dtypes: torch.bfloat16, torch.bfloat16 2025-09-07T08:26:57.0873600Z cpp_CppMicroGemmAMX_3 0.1085 ms 100.0% 2025-09-07T08:26:57.0873836Z bmm 0.2362 ms 46.0% 2025-09-07T08:26:57.0874203Z SingleProcess AUTOTUNE benchmarking takes 0.3216 seconds and 1.4620 seconds precompiling for 2 choices 2025-09-07T08:26:59.2092948Z Autotune Choices Stats: 2025-09-07T08:26:59.2093555Z {"num_choices": 2, 
"num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_4", "best_time": 0.20072899997103377} 2025-09-07T08:26:59.2108926Z AUTOTUNE bmm(16x512x64, 16x64x1024) 2025-09-07T08:26:59.2109238Z strides: [32768, 64, 1], [64, 1, 1024] 2025-09-07T08:26:59.2109892Z dtypes: torch.bfloat16, torch.bfloat16 2025-09-07T08:26:59.2110283Z cpp_CppMicroGemmAMX_4 0.2007 ms 100.0% 2025-09-07T08:26:59.2110577Z bmm 1.0689 ms 18.8% 2025-09-07T08:26:59.2111067Z SingleProcess AUTOTUNE benchmarking takes 0.4625 seconds and 1.4478 seconds precompiling for 2 choices 2025-09-07T08:27:01.4711372Z Autotune Choices Stats: 2025-09-07T08:27:01.4711859Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_6", "best_time": 0.029194000035204226} 2025-09-07T08:27:01.4719972Z AUTOTUNE bmm(16x512x512, 16x512x64) 2025-09-07T08:27:01.4720546Z strides: [262144, 512, 1], [64, 1024, 1] 2025-09-07T08:27:01.4720821Z dtypes: torch.bfloat16, torch.bfloat16 2025-09-07T08:27:01.4721072Z cpp_CppMicroGemmAMX_6 0.0292 ms 100.0% 2025-09-07T08:27:01.4721311Z bmm 0.9668 ms 3.0% 2025-09-07T08:27:01.4721679Z SingleProcess AUTOTUNE benchmarking takes 0.3685 seconds and 1.4925 seconds precompiling for 2 choices 2025-09-07T08:27:03.4926786Z Autotune Choices Stats: 2025-09-07T08:27:03.4927610Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_7", "best_time": 0.06241350001801038} 2025-09-07T08:27:03.4942676Z AUTOTUNE bmm(1x512x1024, 1x1024x1024) 2025-09-07T08:27:03.4943074Z strides: [0, 1024, 1], [1048576, 1024, 1] 2025-09-07T08:27:03.4943727Z dtypes: torch.bfloat16, torch.bfloat16 2025-09-07T08:27:03.4943985Z cpp_CppMicroGemmAMX_7 0.0624 ms 100.0% 2025-09-07T08:27:03.4944319Z bmm 0.1506 ms 41.4% 2025-09-07T08:27:03.4944690Z SingleProcess AUTOTUNE benchmarking takes 0.2999 seconds and 1.5900 seconds precompiling for 2 choices 2025-09-07T08:27:32.7399216Z Autotune Choices Stats: 2025-09-07T08:27:32.7402773Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "_linear_pointwise", "best_time": 2.3574750002808287} 2025-09-07T08:27:32.7412976Z AUTOTUNE linear_unary(512x1024, 32000x1024, 32000) 2025-09-07T08:27:32.7413330Z strides: [1024, 1], [1, 0], [1] 2025-09-07T08:27:32.7414080Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16 2025-09-07T08:27:32.7414483Z _linear_pointwise 2.3575 ms 100.0% 2025-09-07T08:27:32.7414816Z cpp_CppMicroGemmAMX_240 2.8788 ms 81.9% 2025-09-07T08:27:32.7415372Z SingleProcess AUTOTUNE benchmarking takes 0.6469 seconds and 1.4708 seconds precompiling for 2 choices 2025-09-07T08:27:34.9225628Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9229902Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9230339Z return mod(**inputs) 2025-09-07T08:27:34.9230880Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9231457Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9231949Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1334, in forward 2025-09-07T08:27:34.9232508Z pos_emb = self.relative_positional_encoding(qlen, klen, bsz=bsz) 2025-09-07T08:27:34.9233062Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1157, in relative_positional_encoding 2025-09-07T08:27:34.9233612Z pos_emb = self.positional_embedding(fwd_pos_seq, inv_freq, bsz) 2025-09-07T08:27:34.9234171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1115, in positional_embedding 2025-09-07T08:27:34.9234760Z pos_emb = torch.cat([torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)], dim=-1) 2025-09-07T08:27:34.9235004Z 2025-09-07T08:27:34.9235134Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9235523Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9235923Z return mod(**inputs) 2025-09-07T08:27:34.9236366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9236862Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9237780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1334, in forward 2025-09-07T08:27:34.9238234Z pos_emb = self.relative_positional_encoding(qlen, klen, bsz=bsz) 2025-09-07T08:27:34.9238728Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1157, in relative_positional_encoding 2025-09-07T08:27:34.9239234Z pos_emb = self.positional_embedding(fwd_pos_seq, inv_freq, bsz) 2025-09-07T08:27:34.9239780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1115, in positional_embedding 2025-09-07T08:27:34.9240340Z pos_emb = torch.cat([torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)], dim=-1) 2025-09-07T08:27:34.9240576Z 2025-09-07T08:27:34.9240705Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9241120Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9241484Z return mod(**inputs) 2025-09-07T08:27:34.9241909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9242411Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9242877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1307, in forward 2025-09-07T08:27:34.9243332Z word_emb_k = self.word_embedding(input_ids) 2025-09-07T08:27:34.9243497Z 2025-09-07T08:27:34.9243625Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9244033Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9244389Z return mod(**inputs) 2025-09-07T08:27:34.9244816Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9245543Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9246003Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9246444Z outputs = layer_module( 2025-09-07T08:27:34.9246855Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9247529Z outputs = self.rel_attn( 2025-09-07T08:27:34.9247957Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-09-07T08:27:34.9248447Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-09-07T08:27:34.9248625Z 2025-09-07T08:27:34.9248758Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9249148Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9249483Z return mod(**inputs) 2025-09-07T08:27:34.9249865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9250273Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9250671Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9251085Z outputs = layer_module( 2025-09-07T08:27:34.9251493Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9251914Z outputs = self.rel_attn( 2025-09-07T08:27:34.9252319Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward 2025-09-07T08:27:34.9252770Z k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k) 2025-09-07T08:27:34.9252952Z 2025-09-07T08:27:34.9253047Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9253352Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9253649Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9254006Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9254344Z return mod(**inputs) 2025-09-07T08:27:34.9254721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9255129Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9255582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9256048Z outputs = layer_module( 2025-09-07T08:27:34.9256450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9256862Z outputs = self.rel_attn( 2025-09-07T08:27:34.9257258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9257777Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9258261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core 2025-09-07T08:27:34.9258813Z ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h) 2025-09-07T08:27:34.9259029Z 2025-09-07T08:27:34.9259144Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9259539Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9259949Z return mod(**inputs) 2025-09-07T08:27:34.9260359Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9260805Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9261233Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1334, in forward 2025-09-07T08:27:34.9261709Z pos_emb = self.relative_positional_encoding(qlen, klen, bsz=bsz) 2025-09-07T08:27:34.9262229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1157, in relative_positional_encoding 2025-09-07T08:27:34.9262757Z pos_emb = self.positional_embedding(fwd_pos_seq, inv_freq, bsz) 2025-09-07T08:27:34.9263261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1115, in positional_embedding 2025-09-07T08:27:34.9263805Z pos_emb = torch.cat([torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)], dim=-1) 2025-09-07T08:27:34.9264026Z 2025-09-07T08:27:34.9264149Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9264534Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9264887Z return mod(**inputs) 2025-09-07T08:27:34.9265285Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9265715Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9266144Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9266549Z outputs = layer_module( 2025-09-07T08:27:34.9266955Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9267365Z outputs = self.rel_attn( 2025-09-07T08:27:34.9267762Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-09-07T08:27:34.9268244Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-09-07T08:27:34.9268456Z 2025-09-07T08:27:34.9268568Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9269004Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9269362Z return mod(**inputs) 2025-09-07T08:27:34.9269771Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9270218Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9270660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9271110Z outputs = layer_module( 2025-09-07T08:27:34.9271521Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9271947Z outputs = self.rel_attn( 2025-09-07T08:27:34.9272352Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-09-07T08:27:34.9272860Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-09-07T08:27:34.9273071Z 2025-09-07T08:27:34.9273162Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9273397Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9273929Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9274323Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9274689Z return mod(**inputs) 2025-09-07T08:27:34.9275093Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9275535Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9275970Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9276398Z outputs = layer_module( 2025-09-07T08:27:34.9276808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9277236Z outputs = self.rel_attn( 2025-09-07T08:27:34.9277646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9278084Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9278540Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-09-07T08:27:34.9279047Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-09-07T08:27:34.9279254Z 2025-09-07T08:27:34.9279380Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9279780Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9280133Z return mod(**inputs) 2025-09-07T08:27:34.9280537Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9280985Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9281424Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9281845Z outputs = layer_module( 2025-09-07T08:27:34.9282252Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9282681Z outputs = self.rel_attn( 2025-09-07T08:27:34.9283095Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward 2025-09-07T08:27:34.9283556Z v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v) 2025-09-07T08:27:34.9283729Z 2025-09-07T08:27:34.9283821Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9284071Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9284339Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9284782Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9285158Z return mod(**inputs) 2025-09-07T08:27:34.9285564Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9286010Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9286451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9286918Z outputs = layer_module( 2025-09-07T08:27:34.9287449Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9287882Z outputs = self.rel_attn( 2025-09-07T08:27:34.9288296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9288731Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9289173Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-09-07T08:27:34.9289679Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-09-07T08:27:34.9289913Z 2025-09-07T08:27:34.9290032Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9290432Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9290800Z return mod(**inputs) 2025-09-07T08:27:34.9291198Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9291642Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9292102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9292513Z outputs = layer_module( 2025-09-07T08:27:34.9292905Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9293305Z outputs = self.rel_attn( 2025-09-07T08:27:34.9293700Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-09-07T08:27:34.9294133Z output_h = self.post_attention(h, attn_vec) 2025-09-07T08:27:34.9294588Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-09-07T08:27:34.9295069Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-09-07T08:27:34.9295249Z 2025-09-07T08:27:34.9295360Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9295756Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9296116Z return mod(**inputs) 2025-09-07T08:27:34.9296505Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9296927Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9297349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9297739Z outputs = layer_module( 2025-09-07T08:27:34.9298113Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9298503Z outputs = self.rel_attn( 2025-09-07T08:27:34.9298874Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-09-07T08:27:34.9299288Z output_h = self.post_attention(h, attn_vec) 2025-09-07T08:27:34.9299754Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-09-07T08:27:34.9301204Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-09-07T08:27:34.9301391Z 2025-09-07T08:27:34.9301494Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9301864Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9302100Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9302350Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9302726Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9303061Z return mod(**inputs) 2025-09-07T08:27:34.9303467Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9303886Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9304300Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9304763Z outputs = layer_module( 2025-09-07T08:27:34.9305160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9305576Z outputs = self.rel_attn( 2025-09-07T08:27:34.9305975Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-09-07T08:27:34.9306399Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-09-07T08:27:34.9306555Z 2025-09-07T08:27:34.9306666Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9307039Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9307375Z return mod(**inputs) 2025-09-07T08:27:34.9307747Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9308158Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9308554Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9308949Z outputs = layer_module( 2025-09-07T08:27:34.9309322Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9309712Z outputs = self.rel_attn( 2025-09-07T08:27:34.9310099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward 2025-09-07T08:27:34.9310556Z k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k) 2025-09-07T08:27:34.9310730Z 2025-09-07T08:27:34.9310819Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9311059Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9311321Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9311703Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9312057Z return mod(**inputs) 2025-09-07T08:27:34.9312451Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9312879Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9313305Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9313710Z outputs = layer_module( 2025-09-07T08:27:34.9314110Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9314520Z outputs = self.rel_attn( 2025-09-07T08:27:34.9314918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9315329Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9315761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core 2025-09-07T08:27:34.9316294Z ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h) 2025-09-07T08:27:34.9316495Z 2025-09-07T08:27:34.9316618Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9317009Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9317351Z return mod(**inputs) 2025-09-07T08:27:34.9317746Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9318193Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9318623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9319047Z outputs = layer_module( 2025-09-07T08:27:34.9319452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9319875Z outputs = self.rel_attn( 2025-09-07T08:27:34.9320276Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-09-07T08:27:34.9320773Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-09-07T08:27:34.9325126Z 2025-09-07T08:27:34.9325239Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9325483Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9325760Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9326178Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9326546Z return mod(**inputs) 2025-09-07T08:27:34.9326956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9327540Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9327999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9328450Z outputs = layer_module( 2025-09-07T08:27:34.9328879Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9329341Z outputs = self.rel_attn( 2025-09-07T08:27:34.9329753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9330194Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9330631Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-09-07T08:27:34.9331125Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-09-07T08:27:34.9331345Z 2025-09-07T08:27:34.9331466Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9331871Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9332213Z return mod(**inputs) 2025-09-07T08:27:34.9332600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9333058Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9333503Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9333939Z outputs = layer_module( 2025-09-07T08:27:34.9334343Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9334762Z outputs = self.rel_attn( 2025-09-07T08:27:34.9335181Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward 2025-09-07T08:27:34.9335647Z v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v) 2025-09-07T08:27:34.9335867Z 2025-09-07T08:27:34.9335968Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9336204Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9336466Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9336887Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9337264Z return mod(**inputs) 2025-09-07T08:27:34.9337693Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9338133Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9338570Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9338985Z outputs = layer_module( 2025-09-07T08:27:34.9339382Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9339806Z outputs = self.rel_attn( 2025-09-07T08:27:34.9340264Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9340703Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9341172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-09-07T08:27:34.9341751Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-09-07T08:27:34.9341951Z 2025-09-07T08:27:34.9342077Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9342478Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9342848Z return mod(**inputs) 2025-09-07T08:27:34.9343274Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9343734Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9344168Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9344615Z outputs = layer_module( 2025-09-07T08:27:34.9345171Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9345615Z outputs = self.rel_attn( 2025-09-07T08:27:34.9346027Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-09-07T08:27:34.9346472Z output_h = self.post_attention(h, attn_vec) 2025-09-07T08:27:34.9346945Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-09-07T08:27:34.9347444Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-09-07T08:27:34.9347634Z 2025-09-07T08:27:34.9347762Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9348166Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9348528Z return mod(**inputs) 2025-09-07T08:27:34.9348933Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9349382Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9349814Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9350201Z outputs = layer_module( 2025-09-07T08:27:34.9350600Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9351015Z outputs = self.rel_attn( 2025-09-07T08:27:34.9351416Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-09-07T08:27:34.9352014Z output_h = self.post_attention(h, attn_vec) 2025-09-07T08:27:34.9352459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-09-07T08:27:34.9352914Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-09-07T08:27:34.9353091Z 2025-09-07T08:27:34.9353177Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9353399Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9353621Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9353912Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9354303Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9354658Z return mod(**inputs) 2025-09-07T08:27:34.9355055Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9355479Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9355908Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9356322Z outputs = layer_module( 2025-09-07T08:27:34.9356753Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9357187Z outputs = self.rel_attn( 2025-09-07T08:27:34.9357565Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-09-07T08:27:34.9357986Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-09-07T08:27:34.9358141Z 2025-09-07T08:27:34.9358260Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9358628Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9358962Z return mod(**inputs) 2025-09-07T08:27:34.9359333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9359744Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9360172Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9360586Z outputs = layer_module( 2025-09-07T08:27:34.9360972Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9361385Z outputs = self.rel_attn( 2025-09-07T08:27:34.9361782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward 2025-09-07T08:27:34.9362227Z k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k) 2025-09-07T08:27:34.9362394Z 2025-09-07T08:27:34.9362483Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9362714Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9362972Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9363352Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9363699Z return mod(**inputs) 2025-09-07T08:27:34.9364083Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9364512Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9364938Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9365345Z outputs = layer_module( 2025-09-07T08:27:34.9365730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9366151Z outputs = self.rel_attn( 2025-09-07T08:27:34.9366545Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9366987Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9367528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core 2025-09-07T08:27:34.9368052Z ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h) 2025-09-07T08:27:34.9368273Z 2025-09-07T08:27:34.9368392Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9368817Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9369177Z return mod(**inputs) 2025-09-07T08:27:34.9369575Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9370038Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9370470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9370885Z outputs = layer_module( 2025-09-07T08:27:34.9371282Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9371709Z outputs = self.rel_attn( 2025-09-07T08:27:34.9372109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-09-07T08:27:34.9372613Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-09-07T08:27:34.9372834Z 2025-09-07T08:27:34.9372934Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9373166Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9373415Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9373820Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9374172Z return mod(**inputs) 2025-09-07T08:27:34.9374590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9375038Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9375465Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9375897Z outputs = layer_module( 2025-09-07T08:27:34.9376303Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9376740Z outputs = self.rel_attn( 2025-09-07T08:27:34.9377154Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9377591Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9378029Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-09-07T08:27:34.9378524Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-09-07T08:27:34.9378723Z 2025-09-07T08:27:34.9378845Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9379223Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9379606Z return mod(**inputs) 2025-09-07T08:27:34.9379995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9380403Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9380828Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9381249Z outputs = layer_module( 2025-09-07T08:27:34.9381639Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9382057Z outputs = self.rel_attn( 2025-09-07T08:27:34.9382529Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward 2025-09-07T08:27:34.9382943Z v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v) 2025-09-07T08:27:34.9383111Z 2025-09-07T08:27:34.9383196Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9383417Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9383656Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9384054Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9384417Z return mod(**inputs) 2025-09-07T08:27:34.9384811Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9385252Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9385684Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9386073Z outputs = layer_module( 2025-09-07T08:27:34.9386445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9386852Z outputs = self.rel_attn( 2025-09-07T08:27:34.9387245Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9387638Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9388033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-09-07T08:27:34.9388493Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-09-07T08:27:34.9388680Z 2025-09-07T08:27:34.9388786Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9389152Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9389480Z return mod(**inputs) 2025-09-07T08:27:34.9389850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9390256Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9390658Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9391049Z outputs = layer_module( 2025-09-07T08:27:34.9391421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9391797Z outputs = self.rel_attn( 2025-09-07T08:27:34.9392160Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-09-07T08:27:34.9392563Z output_h = self.post_attention(h, attn_vec) 2025-09-07T08:27:34.9392981Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-09-07T08:27:34.9393419Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-09-07T08:27:34.9393594Z 2025-09-07T08:27:34.9393698Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9394063Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9394408Z return mod(**inputs) 2025-09-07T08:27:34.9394758Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9395153Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9395546Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9395936Z outputs = layer_module( 2025-09-07T08:27:34.9396304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9396713Z outputs = self.rel_attn( 2025-09-07T08:27:34.9397090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-09-07T08:27:34.9397509Z output_h = self.post_attention(h, attn_vec) 2025-09-07T08:27:34.9397927Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-09-07T08:27:34.9398383Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-09-07T08:27:34.9398547Z 2025-09-07T08:27:34.9398630Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9398849Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9399064Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9399303Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9399656Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9399993Z return mod(**inputs) 2025-09-07T08:27:34.9400371Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9400783Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9401220Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9401655Z outputs = layer_module( 2025-09-07T08:27:34.9402052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9402471Z outputs = self.rel_attn( 2025-09-07T08:27:34.9402868Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-09-07T08:27:34.9403301Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-09-07T08:27:34.9403470Z 2025-09-07T08:27:34.9403585Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9403970Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9404321Z return mod(**inputs) 2025-09-07T08:27:34.9404711Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9405131Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9405557Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9405980Z outputs = layer_module( 2025-09-07T08:27:34.9406384Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9406815Z outputs = self.rel_attn( 2025-09-07T08:27:34.9407345Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward 2025-09-07T08:27:34.9407824Z k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k) 2025-09-07T08:27:34.9408007Z 2025-09-07T08:27:34.9408103Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9408344Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9408612Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9408990Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9409328Z return mod(**inputs) 2025-09-07T08:27:34.9409708Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9410121Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9410523Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9410917Z outputs = layer_module( 2025-09-07T08:27:34.9411295Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9411708Z outputs = self.rel_attn( 2025-09-07T08:27:34.9412078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9412474Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9412881Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core 2025-09-07T08:27:34.9413362Z ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h) 2025-09-07T08:27:34.9413554Z 2025-09-07T08:27:34.9413670Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9414035Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9414376Z return mod(**inputs) 2025-09-07T08:27:34.9414752Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9415170Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9415581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9415986Z outputs = layer_module( 2025-09-07T08:27:34.9416379Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9416768Z outputs = self.rel_attn( 2025-09-07T08:27:34.9417148Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-09-07T08:27:34.9417579Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-09-07T08:27:34.9417769Z 2025-09-07T08:27:34.9417850Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9418063Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9418297Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9418662Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9418986Z return mod(**inputs) 2025-09-07T08:27:34.9419358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9419770Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9420163Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9420532Z outputs = layer_module( 2025-09-07T08:27:34.9420898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9421288Z outputs = self.rel_attn( 2025-09-07T08:27:34.9421660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9422055Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9422452Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-09-07T08:27:34.9422920Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-09-07T08:27:34.9423110Z 2025-09-07T08:27:34.9423214Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9423572Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9423894Z return mod(**inputs) 2025-09-07T08:27:34.9424247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9424642Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9425031Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9425435Z outputs = layer_module( 2025-09-07T08:27:34.9425792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9426174Z outputs = self.rel_attn( 2025-09-07T08:27:34.9426541Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward 2025-09-07T08:27:34.9426956Z v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v) 2025-09-07T08:27:34.9427109Z 2025-09-07T08:27:34.9427220Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9427429Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9427665Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9428017Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9428343Z return mod(**inputs) 2025-09-07T08:27:34.9428699Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9429093Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9429482Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9429881Z outputs = layer_module( 2025-09-07T08:27:34.9430268Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9430652Z outputs = self.rel_attn( 2025-09-07T08:27:34.9431038Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9431450Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9431860Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-09-07T08:27:34.9432302Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-09-07T08:27:34.9432478Z 2025-09-07T08:27:34.9432582Z cudagraph partition due to non gpu ops. 
Found from :
2025-09-07T08:27:34.9432935Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:27:34.9433254Z     return mod(**inputs)
2025-09-07T08:27:34.9433617Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-09-07T08:27:34.9434002Z     transformer_outputs = self.transformer(
2025-09-07T08:27:34.9434392Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-09-07T08:27:34.9434773Z     outputs = layer_module(
2025-09-07T08:27:34.9435137Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-09-07T08:27:34.9435527Z     outputs = self.rel_attn(
2025-09-07T08:27:34.9435890Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward
2025-09-07T08:27:34.9436303Z     output_h = self.post_attention(h, attn_vec)
2025-09-07T08:27:34.9436734Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention
2025-09-07T08:27:34.9437186Z     attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o)
2025-09-07T08:27:34.9443072Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:27:34.9443463Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:27:34.9443845Z     return mod(**inputs)
2025-09-07T08:27:34.9444258Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-09-07T08:27:34.9444697Z     transformer_outputs = self.transformer(
2025-09-07T08:27:34.9445341Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-09-07T08:27:34.9445778Z     outputs = layer_module(
2025-09-07T08:27:34.9446169Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-09-07T08:27:34.9446597Z     outputs = self.rel_attn(
2025-09-07T08:27:34.9447018Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward
2025-09-07T08:27:34.9447565Z     q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q)
2025-09-07T08:27:34.9447864Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:27:34.9448256Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:27:34.9448597Z     return mod(**inputs)
2025-09-07T08:27:34.9448951Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-09-07T08:27:34.9449342Z     transformer_outputs = self.transformer(
2025-09-07T08:27:34.9449728Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-09-07T08:27:34.9450112Z     outputs = layer_module(
2025-09-07T08:27:34.9450488Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-09-07T08:27:34.9450880Z     outputs = self.rel_attn(
2025-09-07T08:27:34.9451252Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward
2025-09-07T08:27:34.9451664Z     k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k)
2025-09-07T08:27:34.9452343Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:27:34.9452687Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:27:34.9452995Z     return mod(**inputs)
2025-09-07T08:27:34.9453344Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-09-07T08:27:34.9453732Z     transformer_outputs = self.transformer(
2025-09-07T08:27:34.9454199Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-09-07T08:27:34.9454582Z     outputs = layer_module(
2025-09-07T08:27:34.9454958Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-09-07T08:27:34.9455357Z     outputs = self.rel_attn(
2025-09-07T08:27:34.9455738Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward
2025-09-07T08:27:34.9456168Z     attn_vec = self.rel_attn_core(
2025-09-07T08:27:34.9456576Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 263, in rel_attn_core
2025-09-07T08:27:34.9457048Z     ac = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_w_bias, k_head_h)
2025-09-07T08:27:34.9457352Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:27:34.9457719Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:27:34.9458050Z     return mod(**inputs)
2025-09-07T08:27:34.9458453Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-09-07T08:27:34.9458891Z     transformer_outputs = self.transformer(
2025-09-07T08:27:34.9459295Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-09-07T08:27:34.9459687Z     outputs = layer_module(
2025-09-07T08:27:34.9460053Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-09-07T08:27:34.9460442Z     outputs = self.rel_attn(
2025-09-07T08:27:34.9460817Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward
2025-09-07T08:27:34.9461272Z     k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r)
2025-09-07T08:27:34.9462014Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:27:34.9462380Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:27:34.9462716Z     return mod(**inputs)
2025-09-07T08:27:34.9463084Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-09-07T08:27:34.9463489Z     transformer_outputs = self.transformer(
2025-09-07T08:27:34.9463893Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-09-07T08:27:34.9464283Z     outputs = layer_module(
2025-09-07T08:27:34.9464658Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-09-07T08:27:34.9465021Z     outputs = self.rel_attn(
2025-09-07T08:27:34.9465379Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward
2025-09-07T08:27:34.9465752Z     attn_vec = self.rel_attn_core(
2025-09-07T08:27:34.9466138Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core
2025-09-07T08:27:34.9466579Z     bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r)
2025-09-07T08:27:34.9466857Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:27:34.9467206Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:27:34.9467524Z     return mod(**inputs)
2025-09-07T08:27:34.9467873Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-09-07T08:27:34.9468276Z     transformer_outputs = self.transformer(
2025-09-07T08:27:34.9468655Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-09-07T08:27:34.9469023Z     outputs = layer_module(
2025-09-07T08:27:34.9469377Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-09-07T08:27:34.9469746Z     outputs = self.rel_attn(
2025-09-07T08:27:34.9470109Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward
2025-09-07T08:27:34.9470515Z     v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v)
2025-09-07T08:27:34.9471191Z cudagraph partition due to non gpu ops. Found from :
2025-09-07T08:27:34.9471532Z   File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:27:34.9471849Z     return mod(**inputs)
2025-09-07T08:27:34.9472213Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward
2025-09-07T08:27:34.9472640Z     transformer_outputs = self.transformer(
2025-09-07T08:27:34.9473018Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward
2025-09-07T08:27:34.9473392Z     outputs = layer_module(
2025-09-07T08:27:34.9473746Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward
2025-09-07T08:27:34.9474117Z     outputs = self.rel_attn(
2025-09-07T08:27:34.9474469Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward
2025-09-07T08:27:34.9474836Z     attn_vec = self.rel_attn_core(
2025-09-07T08:27:34.9475223Z   File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core
2025-09-07T08:27:34.9475658Z     attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h)
2025-09-07T08:27:34.9647645Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:27:34.9647872Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9647948Z return mod(**inputs) 2025-09-07T08:27:34.9648235Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9648330Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9648599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9648675Z outputs = layer_module( 2025-09-07T08:27:34.9648976Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9649057Z outputs = self.rel_attn( 2025-09-07T08:27:34.9649304Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-09-07T08:27:34.9649447Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-09-07T08:27:34.9649450Z 2025-09-07T08:27:34.9649533Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9649613Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9649725Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9649925Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9650002Z return mod(**inputs) 2025-09-07T08:27:34.9650258Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9650343Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9650603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9650706Z outputs = layer_module( 2025-09-07T08:27:34.9650969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9651042Z outputs = self.rel_attn( 2025-09-07T08:27:34.9651297Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9651403Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9651676Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-09-07T08:27:34.9651824Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-09-07T08:27:34.9651827Z 2025-09-07T08:27:34.9651932Z cudagraph partition due to non gpu ops. 
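Every "Found from :" traceback above bottoms out in one of eight einsum call sites inside XLNet's relative attention (modeling_xlnet.py lines 416, 417, 418, 422, 263, 266, 294 and 304). The following is a condensed, hypothetical sketch of that contraction chain with made-up small dimensions; it keeps only the einsums the tracebacks name and omits the real forward's memory concatenation, rel_shift, masking and exact scaling, so it illustrates the flagged ops rather than reproducing the benchmark code.

# Hypothetical shapes, not the benchmark's. Letters follow the einsum strings in the log:
# i/j = sequence positions, b = batch, n = heads, h = model dim, d = head dim.
import torch

i = j = 8
b, n, h, d = 2, 4, 32, 8

hidden = torch.randn(i, b, h)          # stands in for `h` / `cat` in modeling_xlnet.py
rel = torch.randn(j, b, h)             # stands in for `r`, the relative positional encoding
q_w, k_w, v_w, r_w, o_w = (torch.randn(h, n, d) for _ in range(5))
r_w_bias = r_r_bias = torch.zeros(n, d)   # learned biases in the real model; zeros here

# Head projections flagged at modeling_xlnet.py lines 416-418 and 422
q_head = torch.einsum("ibh,hnd->ibnd", hidden, q_w)
k_head_h = torch.einsum("ibh,hnd->ibnd", hidden, k_w)
v_head_h = torch.einsum("ibh,hnd->ibnd", hidden, v_w)
k_head_r = torch.einsum("ibh,hnd->ibnd", rel, r_w)

# Content-based and position-based attention scores flagged in rel_attn_core (lines 263 and 266)
ac = torch.einsum("ibnd,jbnd->bnij", q_head + r_w_bias, k_head_h)
bd = torch.einsum("ibnd,jbnd->bnij", q_head + r_r_bias, k_head_r)
attn_prob = torch.softmax((ac + bd) / d ** 0.5, dim=-1)

# Weighted sum (line 294) and output projection in post_attention (line 304)
attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h)
attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, o_w)
assert attn_out.shape == (i, b, h)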
cudagraph partition due to non gpu ops. Found from : the same eight call sites in modeling_xlnet.py (forward at lines 416, 417, 418 and 422, rel_attn_core at lines 263, 266 and 294, post_attention at line 304), each reached through benchmarks/dynamo/huggingface.py line 533 and reported again for every remaining XLNet layer.
Found from : 2025-09-07T08:27:34.9735969Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9736036Z return mod(**inputs) 2025-09-07T08:27:34.9736294Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9736377Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9736654Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9736722Z outputs = layer_module( 2025-09-07T08:27:34.9736980Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9737052Z outputs = self.rel_attn( 2025-09-07T08:27:34.9737321Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 416, in forward 2025-09-07T08:27:34.9737429Z q_head_h = torch.einsum("ibh,hnd->ibnd", h, self.q) 2025-09-07T08:27:34.9737433Z 2025-09-07T08:27:34.9737534Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9737736Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9737802Z return mod(**inputs) 2025-09-07T08:27:34.9738052Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9738142Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9738414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9738508Z outputs = layer_module( 2025-09-07T08:27:34.9738757Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9738836Z outputs = self.rel_attn( 2025-09-07T08:27:34.9739085Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 417, in forward 2025-09-07T08:27:34.9739183Z k_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.k) 2025-09-07T08:27:34.9739187Z 2025-09-07T08:27:34.9739275Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9739352Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9739464Z cudagraph partition due to non gpu ops. 
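The repeated partition messages above all point at the einsum projections inside XLNet's relative attention. A minimal, self-contained sketch of that einsum sequence (shapes are invented for illustration; the real model's rel_shift, attention scaling, masking, and memory handling are omitted) makes the flattened frames easier to map back to code:

import torch

# Hypothetical shapes: i/j = sequence length, b = batch, n = heads, d = head dim.
seq, bsz, n_head, d_head = 6, 2, 4, 8
d_model = n_head * d_head

h = torch.randn(seq, bsz, d_model)    # content-stream hidden states ("h" in the traceback)
cat = torch.randn(seq, bsz, d_model)  # mems + h concatenated ("cat"); same length here for simplicity
r = torch.randn(seq, bsz, d_model)    # relative positional encoding ("r")
q, k, v, r_proj, o = (torch.randn(d_model, n_head, d_head) for _ in range(5))  # self.q/k/v/r/o
r_w_bias = torch.randn(n_head, d_head)
r_r_bias = torch.randn(n_head, d_head)

q_head_h = torch.einsum("ibh,hnd->ibnd", h, q)                          # modeling_xlnet.py:416
k_head_h = torch.einsum("ibh,hnd->ibnd", cat, k)                        # modeling_xlnet.py:417
v_head_h = torch.einsum("ibh,hnd->ibnd", cat, v)                        # modeling_xlnet.py:418
k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(r_proj.dtype), r_proj)  # modeling_xlnet.py:422

ac = torch.einsum("ibnd,jbnd->bnij", q_head_h + r_w_bias, k_head_h)     # modeling_xlnet.py:263
bd = torch.einsum("ibnd,jbnd->bnij", q_head_h + r_r_bias, k_head_r)     # modeling_xlnet.py:266
attn_prob = torch.softmax(ac + bd, dim=3)   # the real code also rel-shifts bd, scales, and masks
attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h)         # modeling_xlnet.py:294
attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, o)                   # modeling_xlnet.py:304

print(attn_out.shape)  # torch.Size([6, 2, 32])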
Found from : 2025-09-07T08:27:34.9840347Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9840432Z return mod(**inputs) 2025-09-07T08:27:34.9840705Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9840810Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9841100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9841182Z outputs = layer_module( 2025-09-07T08:27:34.9841470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9841582Z outputs = self.rel_attn( 2025-09-07T08:27:34.9841900Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9841986Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9842287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-09-07T08:27:34.9842438Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-09-07T08:27:34.9842442Z 2025-09-07T08:27:34.9842559Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9842789Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9842864Z return mod(**inputs) 2025-09-07T08:27:34.9843145Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9843247Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9843530Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9843619Z outputs = layer_module( 2025-09-07T08:27:34.9843903Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9843992Z outputs = self.rel_attn( 2025-09-07T08:27:34.9844272Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-09-07T08:27:34.9844377Z output_h = self.post_attention(h, attn_vec) 2025-09-07T08:27:34.9844690Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-09-07T08:27:34.9844821Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-09-07T08:27:34.9844826Z 2025-09-07T08:27:34.9844951Z cudagraph partition due to non gpu ops. 
2025-09-07T08:27:34.9930108Z cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:27:34.9930326Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9930396Z return mod(**inputs) 2025-09-07T08:27:34.9930672Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9930763Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9931041Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9931115Z outputs = layer_module( 2025-09-07T08:27:34.9931386Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9931469Z outputs = self.rel_attn( 2025-09-07T08:27:34.9931739Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 422, in forward 2025-09-07T08:27:34.9931889Z k_head_r = torch.einsum("ibh,hnd->ibnd", r.type(self.r.dtype), self.r) 2025-09-07T08:27:34.9931893Z 2025-09-07T08:27:34.9931981Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9932066Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9932186Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9932401Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9932479Z return mod(**inputs) 2025-09-07T08:27:34.9932749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9932831Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9933087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9933154Z outputs = layer_module( 2025-09-07T08:27:34.9933407Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9933474Z outputs = self.rel_attn( 2025-09-07T08:27:34.9933729Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9933818Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9934088Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 266, in rel_attn_core 2025-09-07T08:27:34.9934227Z bd = torch.einsum("ibnd,jbnd->bnij", q_head + self.r_r_bias, k_head_r) 2025-09-07T08:27:34.9934232Z 2025-09-07T08:27:34.9934333Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9934562Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9934630Z return mod(**inputs) 2025-09-07T08:27:34.9934877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9934968Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9935221Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9935296Z outputs = layer_module( 2025-09-07T08:27:34.9935543Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9935610Z outputs = self.rel_attn( 2025-09-07T08:27:34.9935877Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 418, in forward 2025-09-07T08:27:34.9935993Z v_head_h = torch.einsum("ibh,hnd->ibnd", cat, self.v) 2025-09-07T08:27:34.9935996Z 2025-09-07T08:27:34.9936085Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9936163Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9936272Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9936466Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9936531Z return mod(**inputs) 2025-09-07T08:27:34.9936787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9936871Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9937126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9937194Z outputs = layer_module( 2025-09-07T08:27:34.9937441Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9937518Z outputs = self.rel_attn( 2025-09-07T08:27:34.9937768Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 425, in forward 2025-09-07T08:27:34.9937848Z attn_vec = self.rel_attn_core( 2025-09-07T08:27:34.9938109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 294, in rel_attn_core 2025-09-07T08:27:34.9938230Z attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h) 2025-09-07T08:27:34.9938241Z 2025-09-07T08:27:34.9938343Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:27:34.9938537Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9938614Z return mod(**inputs) 2025-09-07T08:27:34.9938865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9938956Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9939206Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9939273Z outputs = layer_module( 2025-09-07T08:27:34.9939527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9939595Z outputs = self.rel_attn( 2025-09-07T08:27:34.9939848Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-09-07T08:27:34.9939955Z output_h = self.post_attention(h, attn_vec) 2025-09-07T08:27:34.9940224Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-09-07T08:27:34.9940343Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-09-07T08:27:34.9940347Z 2025-09-07T08:27:34.9940449Z cudagraph partition due to non gpu ops. Found from : 2025-09-07T08:27:34.9940668Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:27:34.9940738Z return mod(**inputs) 2025-09-07T08:27:34.9940992Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1607, in forward 2025-09-07T08:27:34.9941075Z transformer_outputs = self.transformer( 2025-09-07T08:27:34.9941324Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1368, in forward 2025-09-07T08:27:34.9941401Z outputs = layer_module( 2025-09-07T08:27:34.9941647Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 494, in forward 2025-09-07T08:27:34.9941739Z outputs = self.rel_attn( 2025-09-07T08:27:34.9942002Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 440, in forward 2025-09-07T08:27:34.9942094Z output_h = self.post_attention(h, attn_vec) 2025-09-07T08:27:34.9942368Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 304, in post_attention 2025-09-07T08:27:34.9942477Z attn_out = torch.einsum("ibnd,hnd->ibh", attn_vec, self.o) 2025-09-07T08:27:34.9942480Z 2025-09-07T08:27:34.9942568Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9942648Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9942727Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9942811Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9942887Z cudagraph partition due to non gpu ops 2025-09-07T08:27:34.9942996Z cudagraph partition due to non gpu ops. 
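The XLNet partition notices above all land on the einsum calls inside the model's relative-attention block (the head projections at modeling_xlnet.py lines 416-422 and the rel_attn_core terms at lines 263, 266 and 294 in the tracebacks). A minimal sketch of what those calls compute is below; the sizes are invented for illustration, not taken from the benchmark config, and the rel_shift applied to the position term in the real model is omitted.

```python
import torch

# Illustrative sizes only (i = query len, j = key len, b = batch,
# n = heads, d = head dim, h = hidden). Not the benchmark's real config.
i, j, b, n, d, h = 4, 4, 2, 3, 8, 24

hidden = torch.randn(i, b, h)            # "h" stream fed into rel_attn
cat = torch.randn(j, b, h)               # concatenated memory + hidden states
r = torch.randn(j, b, h)                 # relative positional encoding
q_w = torch.randn(h, n, d)               # projection weights (self.q, self.k, ...)
k_w, v_w, r_w = torch.randn(h, n, d), torch.randn(h, n, d), torch.randn(h, n, d)
r_w_bias = torch.randn(n, d)
r_r_bias = torch.randn(n, d)

# The projections named in the tracebacks (modeling_xlnet forward, lines 416-422):
q_head = torch.einsum("ibh,hnd->ibnd", hidden, q_w)
k_head_h = torch.einsum("ibh,hnd->ibnd", cat, k_w)
v_head_h = torch.einsum("ibh,hnd->ibnd", cat, v_w)
k_head_r = torch.einsum("ibh,hnd->ibnd", r, r_w)

# rel_attn_core: content-based (ac) and position-based (bd) scores,
# then the weighted sum over values (lines 263, 266 and 294).
ac = torch.einsum("ibnd,jbnd->bnij", q_head + r_w_bias, k_head_h)
bd = torch.einsum("ibnd,jbnd->bnij", q_head + r_r_bias, k_head_r)  # real model rel_shifts bd
attn_prob = torch.softmax((ac + bd) / d ** 0.5, dim=3)
attn_vec = torch.einsum("bnij,jbnd->ibnd", attn_prob, v_head_h)
print(attn_vec.shape)  # torch.Size([4, 2, 3, 8])
```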
Found from :
2025-09-07T08:27:34.9943194Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
2025-09-07T08:27:34.9943261Z return mod(**inputs)
2025-09-07T08:27:34.9943518Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1630, in forward
2025-09-07T08:27:34.9943646Z loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
2025-09-07T08:27:34.9943650Z
2025-09-07T08:28:08.6240825Z Compilation time (from dynamo_timed): 108.531817791
2025-09-07T08:28:08.6278244Z pass
2025-09-07T08:28:08.6284921Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:28:08.6286227Z TIMING: _recursive_pre_grad_passes:0.07086 _recursive_joint_graph_passes:1.40967 _recursive_post_grad_passes:0.25851 bmm_template_precompiling:9.07007 bmm_template_autotuning:2.10459 linear_unary_template_precompiling:1.47955 linear_unary_template_autotuning:0.64416 async_compile.wait:0.75523 code_gen:33.96179 inductor_compile:86.91692 backend_compile:102.35167 gc:0.00094 entire_frame_compile:108.53182 total_wall_time:108.53182
2025-09-07T08:28:08.6287730Z STATS: call_* op count: 820 | FakeTensorMode.__torch_dispatch__:99696 | FakeTensor.__torch_dispatch__:13337 | ProxyTorchDispatchMode.__torch_dispatch__:21786
2025-09-07T08:28:08.6288307Z Dynamo produced 1 graphs covering 820 ops with 0 graph breaks (0 unique)
2025-09-07T08:28:12.7816088Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
2025-09-07T08:28:12.7817439Z import pynvml # type: ignore[import]
2025-09-07T08:28:15.5525192Z /opt/conda/envs/py_3.9/lib/python3.9/site-packages/librosa/util/files.py:10: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-09-07T08:28:15.5526179Z from pkg_resources import resource_filename
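The dynamo_timed summary above reports roughly 108.5 s of compile time and a single graph covering 820 ops with no graph breaks for this model. Outside the benchmark harness, a rough way to get those same two numbers for an arbitrary module is sketched below; the model and shapes are placeholders, and the ExplainOutput field names have moved between PyTorch releases, so treat them as assumptions rather than a fixed API.

```python
import time
import torch
import torch._dynamo
import torch.nn as nn

# Hypothetical stand-in model; the benchmark compiles the real HF module.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8)).eval()
example = torch.randn(4, 64)

# Count graphs and graph breaks, analogous to the "Dynamo produced 1 graphs
# covering 820 ops with 0 graph breaks" line above. Field names are per
# recent releases and may differ in older ones.
explanation = torch._dynamo.explain(model)(example)
print(explanation.graph_count, explanation.graph_break_count)

# Rough first-call compile cost (cold cache), analogous to the
# "Compilation time (from dynamo_timed)" line.
torch._dynamo.reset()
compiled = torch.compile(model)
t0 = time.perf_counter()
compiled(example)  # first call triggers compilation
print(f"first call: {time.perf_counter() - t0:.2f}s")
```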
2025-09-07T08:28:16.2192608Z
2025-09-07T08:28:17.5482576Z loading model: 0it [00:00, ?it/s]
2025-09-07T08:28:17.5482898Z loading model: 0it [00:01, ?it/s]
2025-09-07T08:28:17.5487700Z cpu eval YituTechConvBert
2025-09-07T08:28:18.4399078Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:28:18.7957020Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:28:19.1502328Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu]
2025-09-07T08:28:44.1200694Z Autotune Choices Stats:
2025-09-07T08:28:44.1206409Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_0", "best_time": 0.027860500040333136}
2025-09-07T08:28:44.1214902Z AUTOTUNE linear_unary(512x768, 384x768, 384)
2025-09-07T08:28:44.1215359Z strides: [768, 1], [1, 0], [1]
2025-09-07T08:28:44.1215701Z dtypes: torch.bfloat16, torch.bfloat16, torch.bfloat16
2025-09-07T08:28:44.1216103Z cpp_CppMicroGemmAMX_0 0.0279 ms 100.0%
2025-09-07T08:28:44.1216453Z _linear_pointwise 0.0797 ms 34.9%
2025-09-07T08:28:44.1217012Z SingleProcess AUTOTUNE benchmarking takes 0.2659 seconds and 1.3286 seconds precompiling for 2 choices
2025-09-07T08:28:46.6761751Z Autotune Choices Stats:
2025-09-07T08:28:46.6762234Z {"num_choices": 2, "num_triton_choices": 0, "best_kernel": "cpp_CppMicroGemmAMX_4", "best_time": 0.01431399959983537}
2025-09-07T08:28:46.6782890Z AUTOTUNE bmm(1x512x384, 1x384x54)
2025-09-07T08:28:46.6783235Z strides: [196608, 384, 1], [384, 1, 384]
2025-09-07T08:28:46.6783512Z dtypes: torch.bfloat16, torch.bfloat16
2025-09-07T08:28:46.6783766Z cpp_CppMicroGemmAMX_4 0.0143 ms 100.0%
2025-09-07T08:28:46.6784000Z bmm 0.0512 ms 27.9%
2025-09-07T08:28:46.6784369Z SingleProcess AUTOTUNE benchmarking takes 0.2602 seconds and 1.3963 seconds precompiling for 2 choices
2025-09-07T08:28:55.7107635Z cudagraph partition due to non gpu ops.
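The AUTOTUNE entries above show Inductor's CPU max-autotune picking the C++ AMX micro-GEMM template (cpp_CppMicroGemmAMX_*) over the ATen _linear_pointwise and bmm fallbacks for bfloat16 GEMMs. A minimal sketch of how the same kind of selection can be triggered is below; the shapes mirror the linear_unary(512x768, 384x768, 384) entry, and it assumes an AMX-capable CPU and a build where the freezing flag has this name, so treat it as illustrative rather than the benchmark's exact setup.

```python
import torch
import torch._inductor.config as inductor_config
import torch.nn as nn

# The job also enables freezing (constant-folds weights); flag name assumed
# from current Inductor config.
inductor_config.freezing = True

# Linear + unary activation matches the fused "linear_unary" pattern above.
layer = nn.Sequential(nn.Linear(768, 384), nn.ReLU()).eval()
x = torch.randn(512, 768)

compiled = torch.compile(layer, mode="max-autotune")
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    out = compiled(x)  # autotuning happens on the first call
print(out.shape, out.dtype)
```

Whether the micro-GEMM template or the ATen fallback wins, and whether an AUTOTUNE table like the one above is printed at all, depends on the host CPU, the PyTorch build, and the logging settings in effect.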
Found from : 2025-09-07T08:28:55.7108209Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7108565Z return mod(**inputs) 2025-09-07T08:28:55.7108995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7109445Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7109892Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7110315Z hidden_states = self.encoder( 2025-09-07T08:28:55.7110782Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7111238Z layer_outputs = layer_module( 2025-09-07T08:28:55.7111638Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7112031Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7112491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7112956Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7113405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7113841Z self_outputs = self.self( 2025-09-07T08:28:55.7114607Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7115137Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7115675Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7116107Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7116248Z 2025-09-07T08:28:55.7116427Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7116798Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7117171Z return mod(**inputs) 2025-09-07T08:28:55.7117581Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7118024Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7118463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7118956Z hidden_states = self.encoder( 2025-09-07T08:28:55.7119414Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7119867Z layer_outputs = layer_module( 2025-09-07T08:28:55.7120247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7120645Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7121099Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7121548Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7121989Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7122438Z self_outputs = self.self( 2025-09-07T08:28:55.7122850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7123368Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7123899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7124356Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7124500Z 2025-09-07T08:28:55.7124622Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7125027Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7125387Z return mod(**inputs) 2025-09-07T08:28:55.7125801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7126264Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7126710Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7127486Z hidden_states = self.encoder( 2025-09-07T08:28:55.7127925Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7128394Z layer_outputs = layer_module( 2025-09-07T08:28:55.7128787Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7129190Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7129660Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7130110Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7131355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7131797Z self_outputs = self.self( 2025-09-07T08:28:55.7132229Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7132777Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7133362Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-09-07T08:28:55.7133795Z x = self.pointwise(x) 2025-09-07T08:28:55.7133929Z 2025-09-07T08:28:55.7134026Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7134271Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7134508Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7134733Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7134964Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7135231Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7135629Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7136005Z return mod(**inputs) 2025-09-07T08:28:55.7136459Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7136935Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7137395Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7137842Z hidden_states = self.encoder( 2025-09-07T08:28:55.7138266Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7138696Z layer_outputs = layer_module( 2025-09-07T08:28:55.7139072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7139446Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7139858Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7140286Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7140745Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7141175Z self_outputs = self.self( 2025-09-07T08:28:55.7141591Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-09-07T08:28:55.7142068Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-09-07T08:28:55.7142264Z 2025-09-07T08:28:55.7142378Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7142768Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7143125Z return mod(**inputs) 2025-09-07T08:28:55.7143536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7143971Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7144406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7144812Z hidden_states = self.encoder( 2025-09-07T08:28:55.7145401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7145820Z layer_outputs = layer_module( 2025-09-07T08:28:55.7146185Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7146579Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7147069Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7147503Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7147931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7148358Z self_outputs = self.self( 2025-09-07T08:28:55.7148800Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-09-07T08:28:55.7149268Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-09-07T08:28:55.7149455Z 2025-09-07T08:28:55.7149548Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7149761Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7150003Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7150371Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7150707Z return mod(**inputs) 2025-09-07T08:28:55.7151115Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7151534Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7152004Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7152433Z hidden_states = self.encoder( 2025-09-07T08:28:55.7152854Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7153273Z layer_outputs = layer_module( 2025-09-07T08:28:55.7153645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7154033Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7154463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7154896Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7155326Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7155755Z self_outputs = self.self( 2025-09-07T08:28:55.7156169Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-09-07T08:28:55.7156639Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-09-07T08:28:55.7156814Z 2025-09-07T08:28:55.7156910Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7157140Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7157617Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7157844Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7158071Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7158318Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7158706Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7159061Z return mod(**inputs) 2025-09-07T08:28:55.7159470Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7159901Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7160358Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7160786Z hidden_states = self.encoder( 2025-09-07T08:28:55.7161208Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7161632Z layer_outputs = layer_module( 2025-09-07T08:28:55.7162024Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7162416Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7162853Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7163295Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7163761Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7164184Z self_outputs = self.self( 2025-09-07T08:28:55.7164603Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7165129Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7165657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7166097Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7166240Z 2025-09-07T08:28:55.7166353Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7166790Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7167256Z return mod(**inputs) 2025-09-07T08:28:55.7167686Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7168134Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7168592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7169026Z hidden_states = self.encoder( 2025-09-07T08:28:55.7169450Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7169883Z layer_outputs = layer_module( 2025-09-07T08:28:55.7170253Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7170652Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7171091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7171535Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7171950Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7172348Z self_outputs = self.self( 2025-09-07T08:28:55.7172737Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7173228Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7173721Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7174126Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7174267Z 2025-09-07T08:28:55.7174377Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7174746Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7175082Z return mod(**inputs) 2025-09-07T08:28:55.7175469Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7175880Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7176298Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7176705Z hidden_states = self.encoder( 2025-09-07T08:28:55.7177122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7177524Z layer_outputs = layer_module( 2025-09-07T08:28:55.7177871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7178237Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7178645Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7179077Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7179485Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7179889Z self_outputs = self.self( 2025-09-07T08:28:55.7180278Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7180768Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7181260Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-09-07T08:28:55.7181675Z x = self.pointwise(x) 2025-09-07T08:28:55.7181817Z 2025-09-07T08:28:55.7181901Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7182123Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7182339Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7182557Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7182765Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7183008Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7183376Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7183711Z return mod(**inputs) 2025-09-07T08:28:55.7184090Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7184510Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7184929Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7185338Z hidden_states = self.encoder( 2025-09-07T08:28:55.7185730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7186137Z layer_outputs = layer_module( 2025-09-07T08:28:55.7186534Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7186916Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7187333Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7187744Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7188159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7188561Z self_outputs = self.self( 2025-09-07T08:28:55.7188967Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-09-07T08:28:55.7189397Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-09-07T08:28:55.7189568Z 2025-09-07T08:28:55.7189674Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7190035Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7190364Z return mod(**inputs) 2025-09-07T08:28:55.7190742Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7191179Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7191583Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7191985Z hidden_states = self.encoder( 2025-09-07T08:28:55.7192387Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7192800Z layer_outputs = layer_module( 2025-09-07T08:28:55.7193151Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7193512Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7193910Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7194377Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7194779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7195165Z self_outputs = self.self( 2025-09-07T08:28:55.7195549Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-09-07T08:28:55.7196017Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-09-07T08:28:55.7196214Z 2025-09-07T08:28:55.7196304Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7196520Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7196750Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7197109Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7197434Z return mod(**inputs) 2025-09-07T08:28:55.7197808Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7198206Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7198620Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7199015Z hidden_states = self.encoder( 2025-09-07T08:28:55.7199393Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7199775Z layer_outputs = layer_module( 2025-09-07T08:28:55.7200104Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7200497Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7200891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7201289Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7201682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7202077Z self_outputs = self.self( 2025-09-07T08:28:55.7202457Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-09-07T08:28:55.7202893Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-09-07T08:28:55.7203062Z 2025-09-07T08:28:55.7203152Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7203361Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7203579Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7203788Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7204000Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7204231Z cudagraph partition due to non gpu ops. 
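Several of the ConvBert notices above point at modeling_convbert.py line 405, where the model concatenates the ordinary self-attention heads with the span-based convolution heads along the head dimension. A toy illustration of that step, with head counts and dimensions invented for the example:

```python
import torch

# Illustrative only: ConvBert merges attention heads and convolution heads
# along dim 2, as in "context_layer = torch.cat([context_layer, conv_out], 2)".
batch, seq, attn_heads, conv_heads, head_dim = 2, 128, 6, 6, 64
context_layer = torch.randn(batch, seq, attn_heads, head_dim)
conv_out = torch.randn(batch, seq, conv_heads, head_dim)
merged = torch.cat([context_layer, conv_out], 2)
print(merged.shape)  # torch.Size([2, 128, 12, 64])
```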
Found from : 2025-09-07T08:28:55.7204594Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7204925Z return mod(**inputs) 2025-09-07T08:28:55.7205328Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7205744Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7206177Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7206615Z hidden_states = self.encoder( 2025-09-07T08:28:55.7207212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7207660Z layer_outputs = layer_module( 2025-09-07T08:28:55.7208046Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7208470Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7208871Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7209278Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7209810Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7210247Z self_outputs = self.self( 2025-09-07T08:28:55.7210653Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7211135Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7211623Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7212029Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7212160Z 2025-09-07T08:28:55.7212267Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7212632Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7212964Z return mod(**inputs) 2025-09-07T08:28:55.7213341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7213753Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7214157Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7214552Z hidden_states = self.encoder( 2025-09-07T08:28:55.7214944Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7215341Z layer_outputs = layer_module( 2025-09-07T08:28:55.7215678Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7216039Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7216445Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7216851Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7217309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7217701Z self_outputs = self.self( 2025-09-07T08:28:55.7218087Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7218563Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7219048Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7219453Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7219582Z 2025-09-07T08:28:55.7219685Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7220063Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7220388Z return mod(**inputs) 2025-09-07T08:28:55.7220763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7221161Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7221601Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7221996Z hidden_states = self.encoder( 2025-09-07T08:28:55.7222383Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7222775Z layer_outputs = layer_module( 2025-09-07T08:28:55.7223109Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7223469Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7223865Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7224295Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7224703Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7225086Z self_outputs = self.self( 2025-09-07T08:28:55.7225477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7225943Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7226406Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-09-07T08:28:55.7226790Z x = self.pointwise(x) 2025-09-07T08:28:55.7226901Z 2025-09-07T08:28:55.7226980Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7227190Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7227394Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7227595Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7227788Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7228018Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7228372Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7228696Z return mod(**inputs) 2025-09-07T08:28:55.7229053Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7229445Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7229841Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7230237Z hidden_states = self.encoder( 2025-09-07T08:28:55.7230626Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7231016Z layer_outputs = layer_module( 2025-09-07T08:28:55.7231365Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7231728Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7232140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7232528Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7232922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7233303Z self_outputs = self.self( 2025-09-07T08:28:55.7233677Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-09-07T08:28:55.7234135Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-09-07T08:28:55.7234303Z 2025-09-07T08:28:55.7234407Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7234762Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7235083Z return mod(**inputs) 2025-09-07T08:28:55.7235481Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7235889Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7236296Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7236692Z hidden_states = self.encoder( 2025-09-07T08:28:55.7237091Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7237478Z layer_outputs = layer_module( 2025-09-07T08:28:55.7237803Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7238172Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7238578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7238976Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7239367Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7239753Z self_outputs = self.self( 2025-09-07T08:28:55.7240140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-09-07T08:28:55.7240597Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-09-07T08:28:55.7240783Z 2025-09-07T08:28:55.7240874Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7241091Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7241329Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7241713Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7242064Z return mod(**inputs) 2025-09-07T08:28:55.7242476Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7242885Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7243309Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7243713Z hidden_states = self.encoder( 2025-09-07T08:28:55.7244105Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7244527Z layer_outputs = layer_module( 2025-09-07T08:28:55.7244891Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7245486Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7245935Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7246382Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7246825Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7247353Z self_outputs = self.self( 2025-09-07T08:28:55.7247788Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-09-07T08:28:55.7248286Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-09-07T08:28:55.7248530Z 2025-09-07T08:28:55.7248619Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7248824Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7249035Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7249247Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7249460Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7249693Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7250082Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7250405Z return mod(**inputs) 2025-09-07T08:28:55.7250779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7251187Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7251594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7252007Z hidden_states = self.encoder( 2025-09-07T08:28:55.7252394Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7252819Z layer_outputs = layer_module( 2025-09-07T08:28:55.7253189Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7253548Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7253960Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7254377Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7254792Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7255190Z self_outputs = self.self( 2025-09-07T08:28:55.7255578Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7256066Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7256555Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7256960Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7257090Z 2025-09-07T08:28:55.7257195Z cudagraph partition due to non gpu ops. 
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
    generator_hidden_states = self.convbert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
    hidden_states = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
    self_attention_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward
    mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward
    x = self.depthwise(hidden_states)

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
    generator_hidden_states = self.convbert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
    hidden_states = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
    self_attention_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward
    mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2))
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward
    x = self.pointwise(x)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
    generator_hidden_states = self.convbert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
    hidden_states = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
    self_attention_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward
    conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer)

cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
    generator_hidden_states = self.convbert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
    hidden_states = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
    self_attention_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward
    conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from :
  File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass
    return mod(**inputs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward
    generator_hidden_states = self.convbert(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward
    hidden_states = self.encoder(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward
    layer_outputs = layer_module(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__
    return super().__call__(*args, **kwargs)
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward
    self_attention_outputs = self.attention(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward
    self_outputs = self.self(
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward
    context_layer = torch.cat([context_layer, conv_out], 2)

cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops
cudagraph partition due to non gpu ops.
Found from : 2025-09-07T08:28:55.7475564Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7475881Z return mod(**inputs) 2025-09-07T08:28:55.7476289Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7476694Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7477100Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7477491Z hidden_states = self.encoder( 2025-09-07T08:28:55.7477909Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7478308Z layer_outputs = layer_module( 2025-09-07T08:28:55.7478657Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7479016Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7479412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7479818Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7480222Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7480635Z self_outputs = self.self( 2025-09-07T08:28:55.7481049Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7481574Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7482106Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7482561Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7482702Z 2025-09-07T08:28:55.7482823Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7483216Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7483547Z return mod(**inputs) 2025-09-07T08:28:55.7483931Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7484351Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7484779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7485179Z hidden_states = self.encoder( 2025-09-07T08:28:55.7485586Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7486029Z layer_outputs = layer_module( 2025-09-07T08:28:55.7486405Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7486793Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7487307Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7487755Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7488202Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7488637Z self_outputs = self.self( 2025-09-07T08:28:55.7489025Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7489511Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7489999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7490411Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7490543Z 2025-09-07T08:28:55.7490655Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7491066Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7491399Z return mod(**inputs) 2025-09-07T08:28:55.7491783Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7492206Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7492625Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7493010Z hidden_states = self.encoder( 2025-09-07T08:28:55.7493400Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7493794Z layer_outputs = layer_module( 2025-09-07T08:28:55.7494140Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7494499Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7494893Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7495313Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7495730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7496114Z self_outputs = self.self( 2025-09-07T08:28:55.7496477Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7496940Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7497403Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-09-07T08:28:55.7497786Z x = self.pointwise(x) 2025-09-07T08:28:55.7497898Z 2025-09-07T08:28:55.7497987Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7498198Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7498407Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7498624Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7498827Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7499045Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7499392Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7499709Z return mod(**inputs) 2025-09-07T08:28:55.7499964Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7500055Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7500308Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7500389Z hidden_states = self.encoder( 2025-09-07T08:28:55.7500643Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7500716Z layer_outputs = layer_module( 2025-09-07T08:28:55.7500940Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7501021Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7501287Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7501369Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7501634Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7501704Z self_outputs = self.self( 2025-09-07T08:28:55.7501959Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-09-07T08:28:55.7502108Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-09-07T08:28:55.7502112Z 2025-09-07T08:28:55.7502215Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7502420Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7502490Z return mod(**inputs) 2025-09-07T08:28:55.7502780Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7502871Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7503124Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7503201Z hidden_states = self.encoder( 2025-09-07T08:28:55.7503455Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7503534Z layer_outputs = layer_module( 2025-09-07T08:28:55.7503764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7503842Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7504122Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7504202Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7504462Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7504530Z self_outputs = self.self( 2025-09-07T08:28:55.7504784Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-09-07T08:28:55.7504918Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-09-07T08:28:55.7504921Z 2025-09-07T08:28:55.7505000Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7505085Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7505189Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7505389Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7505466Z return mod(**inputs) 2025-09-07T08:28:55.7505730Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7505820Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7506079Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7506160Z hidden_states = self.encoder( 2025-09-07T08:28:55.7506419Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7506492Z layer_outputs = layer_module( 2025-09-07T08:28:55.7506720Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7506797Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7507078Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7507158Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7507421Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7507497Z self_outputs = self.self( 2025-09-07T08:28:55.7507749Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-09-07T08:28:55.7507885Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-09-07T08:28:55.7507889Z 2025-09-07T08:28:55.7507966Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7508043Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7508125Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7508202Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7508287Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7508388Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7508596Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7508669Z return mod(**inputs) 2025-09-07T08:28:55.7508922Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7509009Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7509263Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7509344Z hidden_states = self.encoder( 2025-09-07T08:28:55.7509598Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7509685Z layer_outputs = layer_module( 2025-09-07T08:28:55.7509924Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7510000Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7510259Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7510338Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7510590Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7510668Z self_outputs = self.self( 2025-09-07T08:28:55.7510928Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7511094Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7511356Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7511441Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7511444Z 2025-09-07T08:28:55.7511547Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7511742Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7511816Z return mod(**inputs) 2025-09-07T08:28:55.7512076Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7512163Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7512423Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7512494Z hidden_states = self.encoder( 2025-09-07T08:28:55.7512764Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7512837Z layer_outputs = layer_module( 2025-09-07T08:28:55.7513071Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7513145Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7513398Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7513484Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7513743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7513841Z self_outputs = self.self( 2025-09-07T08:28:55.7514103Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7514266Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7514528Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7514623Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7514626Z 2025-09-07T08:28:55.7514736Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7514931Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7515004Z return mod(**inputs) 2025-09-07T08:28:55.7515269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7515350Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7515619Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7515709Z hidden_states = self.encoder( 2025-09-07T08:28:55.7516009Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7516079Z layer_outputs = layer_module( 2025-09-07T08:28:55.7516306Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7516384Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7516646Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7516736Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7516999Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7517077Z self_outputs = self.self( 2025-09-07T08:28:55.7517341Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7517496Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7517765Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-09-07T08:28:55.7517837Z x = self.pointwise(x) 2025-09-07T08:28:55.7517840Z 2025-09-07T08:28:55.7517928Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7518007Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7518095Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7518171Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7518250Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7518362Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7518558Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7518635Z return mod(**inputs) 2025-09-07T08:28:55.7518899Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7518981Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7519249Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7519321Z hidden_states = self.encoder( 2025-09-07T08:28:55.7519592Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7519663Z layer_outputs = layer_module( 2025-09-07T08:28:55.7519896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7519981Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7520246Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7520336Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7520616Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7520687Z self_outputs = self.self( 2025-09-07T08:28:55.7520962Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-09-07T08:28:55.7521085Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-09-07T08:28:55.7521088Z 2025-09-07T08:28:55.7521200Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7521405Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7521482Z return mod(**inputs) 2025-09-07T08:28:55.7521801Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7521907Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7522212Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7522291Z hidden_states = self.encoder( 2025-09-07T08:28:55.7522580Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7522656Z layer_outputs = layer_module( 2025-09-07T08:28:55.7522892Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7522980Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7523251Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7523352Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7523613Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7523692Z self_outputs = self.self( 2025-09-07T08:28:55.7523956Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-09-07T08:28:55.7524081Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-09-07T08:28:55.7524084Z 2025-09-07T08:28:55.7524172Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7524250Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7524360Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7524557Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7524624Z return mod(**inputs) 2025-09-07T08:28:55.7524896Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7524980Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7525250Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7525325Z hidden_states = self.encoder( 2025-09-07T08:28:55.7525594Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7525674Z layer_outputs = layer_module( 2025-09-07T08:28:55.7525898Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7525984Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7526269Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7526359Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7526633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7526711Z self_outputs = self.self( 2025-09-07T08:28:55.7527118Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-09-07T08:28:55.7527247Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-09-07T08:28:55.7527251Z 2025-09-07T08:28:55.7527346Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7527431Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7527515Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7527606Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7527690Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7527809Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7528016Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7528102Z return mod(**inputs) 2025-09-07T08:28:55.7528390Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7528475Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7528743Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7528814Z hidden_states = self.encoder( 2025-09-07T08:28:55.7529073Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7529151Z layer_outputs = layer_module( 2025-09-07T08:28:55.7529369Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7529453Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7529716Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7529807Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7530072Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7530140Z self_outputs = self.self( 2025-09-07T08:28:55.7530412Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7530568Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7530834Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7530914Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7530917Z 2025-09-07T08:28:55.7531027Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7531230Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7531301Z return mod(**inputs) 2025-09-07T08:28:55.7531579Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7531663Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7531941Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7532015Z hidden_states = self.encoder( 2025-09-07T08:28:55.7532310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7532419Z layer_outputs = layer_module( 2025-09-07T08:28:55.7532656Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7532749Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7533033Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7533115Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7533401Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7533474Z self_outputs = self.self( 2025-09-07T08:28:55.7533744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7533901Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7534184Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7534260Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7534264Z 2025-09-07T08:28:55.7534384Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7534598Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7534664Z return mod(**inputs) 2025-09-07T08:28:55.7534926Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7535007Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7535261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7535340Z hidden_states = self.encoder( 2025-09-07T08:28:55.7535597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7535673Z layer_outputs = layer_module( 2025-09-07T08:28:55.7535889Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7535975Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7536247Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7536328Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7536599Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7536668Z self_outputs = self.self( 2025-09-07T08:28:55.7536936Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7537090Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7537355Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-09-07T08:28:55.7537435Z x = self.pointwise(x) 2025-09-07T08:28:55.7537440Z 2025-09-07T08:28:55.7537521Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7537606Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7537686Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7537763Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7537848Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7537952Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7538155Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7538232Z return mod(**inputs) 2025-09-07T08:28:55.7538508Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7538594Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7538850Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7538929Z hidden_states = self.encoder( 2025-09-07T08:28:55.7539201Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7539278Z layer_outputs = layer_module( 2025-09-07T08:28:55.7539491Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7539566Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7539832Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7539913Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7540174Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7540272Z self_outputs = self.self( 2025-09-07T08:28:55.7540527Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-09-07T08:28:55.7540669Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-09-07T08:28:55.7540674Z 2025-09-07T08:28:55.7540773Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7540967Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7541033Z return mod(**inputs) 2025-09-07T08:28:55.7541311Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7541403Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7541682Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7541768Z hidden_states = self.encoder( 2025-09-07T08:28:55.7542051Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7542136Z layer_outputs = layer_module( 2025-09-07T08:28:55.7542373Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7542457Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7542750Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7542837Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7543125Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7543210Z self_outputs = self.self( 2025-09-07T08:28:55.7543463Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-09-07T08:28:55.7543592Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-09-07T08:28:55.7543596Z 2025-09-07T08:28:55.7543672Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7543754Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7543852Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7544046Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7544110Z return mod(**inputs) 2025-09-07T08:28:55.7544366Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7544473Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7544724Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7544804Z hidden_states = self.encoder( 2025-09-07T08:28:55.7545176Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7545256Z layer_outputs = layer_module( 2025-09-07T08:28:55.7545536Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7545616Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7545886Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7545968Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7546236Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7546309Z self_outputs = self.self( 2025-09-07T08:28:55.7546597Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-09-07T08:28:55.7546743Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-09-07T08:28:55.7546747Z 2025-09-07T08:28:55.7546828Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7546915Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7546993Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7547070Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7547156Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7547261Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7547461Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7547529Z return mod(**inputs) 2025-09-07T08:28:55.7547790Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7547880Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7548147Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7548228Z hidden_states = self.encoder( 2025-09-07T08:28:55.7548484Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7548555Z layer_outputs = layer_module( 2025-09-07T08:28:55.7548779Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7548858Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7549126Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7549209Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7549478Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7549550Z self_outputs = self.self( 2025-09-07T08:28:55.7549813Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7549978Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7550238Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7550321Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7550324Z 2025-09-07T08:28:55.7550426Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7550620Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7550718Z return mod(**inputs) 2025-09-07T08:28:55.7550979Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7551069Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7551349Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7551433Z hidden_states = self.encoder( 2025-09-07T08:28:55.7551744Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7551818Z layer_outputs = layer_module( 2025-09-07T08:28:55.7552050Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7552130Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7552409Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7552492Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7552777Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7552877Z self_outputs = self.self( 2025-09-07T08:28:55.7553159Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7553321Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7553582Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 282, in forward 2025-09-07T08:28:55.7553666Z x = self.depthwise(hidden_states) 2025-09-07T08:28:55.7553670Z 2025-09-07T08:28:55.7553774Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7553972Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7554047Z return mod(**inputs) 2025-09-07T08:28:55.7554310Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7554401Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7554661Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7554733Z hidden_states = self.encoder( 2025-09-07T08:28:55.7555005Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7555077Z layer_outputs = layer_module( 2025-09-07T08:28:55.7555301Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7555379Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7555641Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7555732Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7555995Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7556073Z self_outputs = self.self( 2025-09-07T08:28:55.7556339Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 347, in forward 2025-09-07T08:28:55.7556500Z mixed_key_conv_attn_layer = self.key_conv_attn_layer(hidden_states.transpose(1, 2)) 2025-09-07T08:28:55.7556763Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 283, in forward 2025-09-07T08:28:55.7556853Z x = self.pointwise(x) 2025-09-07T08:28:55.7556857Z 2025-09-07T08:28:55.7556942Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7557021Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7557104Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7557183Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7557259Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7557369Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7557585Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7557659Z return mod(**inputs) 2025-09-07T08:28:55.7557918Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7557999Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7558267Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7558341Z hidden_states = self.encoder( 2025-09-07T08:28:55.7558609Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7558696Z layer_outputs = layer_module( 2025-09-07T08:28:55.7558930Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7559019Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7559281Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7559372Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7559633Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7559709Z self_outputs = self.self( 2025-09-07T08:28:55.7559969Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 362, in forward 2025-09-07T08:28:55.7560090Z conv_kernel_layer = self.conv_kernel_layer(conv_attn_layer) 2025-09-07T08:28:55.7560093Z 2025-09-07T08:28:55.7560204Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7560403Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7560476Z return mod(**inputs) 2025-09-07T08:28:55.7560738Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7560822Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7561096Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7561170Z hidden_states = self.encoder( 2025-09-07T08:28:55.7561443Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7561518Z layer_outputs = layer_module( 2025-09-07T08:28:55.7561751Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7561834Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7562102Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7562194Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7562464Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7562543Z self_outputs = self.self( 2025-09-07T08:28:55.7562820Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 380, in forward 2025-09-07T08:28:55.7562968Z conv_out_layer = torch.matmul(conv_out_layer, conv_kernel_layer) 2025-09-07T08:28:55.7562980Z 2025-09-07T08:28:55.7563060Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7563139Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7563250Z cudagraph partition due to non gpu ops. 
Found from : 2025-09-07T08:28:55.7563451Z File "/var/lib/jenkins/workspace/benchmarks/dynamo/huggingface.py", line 533, in forward_pass 2025-09-07T08:28:55.7563520Z return mod(**inputs) 2025-09-07T08:28:55.7563826Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 925, in forward 2025-09-07T08:28:55.7563911Z generator_hidden_states = self.convbert( 2025-09-07T08:28:55.7564187Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 853, in forward 2025-09-07T08:28:55.7564259Z hidden_states = self.encoder( 2025-09-07T08:28:55.7564531Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 625, in forward 2025-09-07T08:28:55.7564607Z layer_outputs = layer_module( 2025-09-07T08:28:55.7564864Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/modeling_layers.py", line 94, in __call__ 2025-09-07T08:28:55.7564957Z return super().__call__(*args, **kwargs) 2025-09-07T08:28:55.7565261Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 561, in forward 2025-09-07T08:28:55.7565359Z self_attention_outputs = self.attention( 2025-09-07T08:28:55.7565640Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 464, in forward 2025-09-07T08:28:55.7565714Z self_outputs = self.self( 2025-09-07T08:28:55.7566006Z File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/models/convbert/modeling_convbert.py", line 405, in forward 2025-09-07T08:28:55.7566130Z context_layer = torch.cat([context_layer, conv_out], 2) 2025-09-07T08:28:55.7566133Z 2025-09-07T08:28:55.7566227Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7566311Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7566443Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7566527Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7566611Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7566701Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7566783Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7566939Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7567036Z cudagraph partition due to non gpu ops 2025-09-07T08:28:55.7567118Z cudagraph partition due to non gpu ops 2025-09-07T08:29:06.3233681Z Compilation time (from dynamo_timed): 45.855949753 2025-09-07T08:29:06.3273259Z pass 2025-09-07T08:29:06.3273683Z WARNING:common:Trying to call the empty_gpu_cache for device: cpu, which is not in list [cuda, xpu] 2025-09-07T08:29:06.3274894Z TIMING: _recursive_pre_grad_passes:0.06167 _recursive_joint_graph_passes:1.00046 _recursive_post_grad_passes:0.17948 linear_unary_template_precompiling:1.34361 linear_unary_template_autotuning:0.26356 bmm_template_precompiling:1.39823 bmm_template_autotuning:0.25748 async_compile.wait:0.65767 code_gen:9.53404 inductor_compile:30.843 backend_compile:40.80836 gc:0.00041 entire_frame_compile:45.85595 total_wall_time:45.85595 2025-09-07T08:29:06.3276235Z STATS: call_* op count: 636 | FakeTensorMode.__torch_dispatch__:51114 | FakeTensor.__torch_dispatch__:5254 | ProxyTorchDispatchMode.__torch_dispatch__:12594 2025-09-07T08:29:06.3276748Z Dynamo produced 1 graphs covering 636 ops with 0 graph breaks (0 unique) 2025-09-07T08:29:08.5232144Z accuracy pass_rate=95.35% 2025-09-07T08:29:08.5232464Z calls_captured gmean=0.00x 
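The block above records each site where Inductor partitioned its cudagraph region because the surrounding ops do not run on a GPU (expected on this CPU-only shard), followed by the compile-time breakdown and the per-model accuracy verdict ("pass", accuracy pass_rate). A minimal sketch of that style of check, assuming default torch.compile behaviour; it is not the benchmark harness itself, and the torch._dynamo counter key names are assumptions that may differ across PyTorch versions:

```python
# Minimal sketch: compare eager vs. torch.compile outputs within a tolerance
# and read Dynamo's compile counters. Counter key names are assumptions.
import torch
import torch.nn as nn
import torch._dynamo as dynamo
from torch._dynamo.utils import counters

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8)).eval()
example = torch.randn(4, 64)

dynamo.reset()
with torch.no_grad():
    expected = model(example)        # eager reference
    compiled = torch.compile(model)  # default Inductor backend
    actual = compiled(example)       # first call triggers compilation

ok = torch.allclose(expected, actual, rtol=1e-3, atol=1e-3)
print("accuracy:", "pass" if ok else "fail")
print("graph breaks:", sum(counters["graph_break"].values()))
print("stats:", dict(counters["stats"]))
```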
mean=612.116x 2025-09-07T08:29:08.5232711Z unique_graphs gmean=0.00x mean=1.140x 2025-09-07T08:29:08.5232939Z graph_breaks gmean=0.00x mean=0.140x 2025-09-07T08:29:08.5241887Z unique_graph_breaks gmean=0.00x mean=0.047x 2025-09-07T08:29:08.5242138Z autograd_captures gmean=0.00x mean=0.000x 2025-09-07T08:29:08.5242394Z autograd_compiles gmean=0.00x mean=0.000x 2025-09-07T08:29:08.5244971Z cudagraph_skips gmean=0.00x mean=1.093x 2025-09-07T08:29:08.5245400Z compilation_latency mean=46.115 seconds 2025-09-07T08:29:09.4118970Z + python benchmarks/dynamo/check_accuracy.py --actual /var/lib/jenkins/workspace/test/test-reports/inference_huggingface.csv --expected benchmarks/dynamo/ci_expected_accuracy/dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface_inference.csv 2025-09-07T08:29:09.6764587Z AlbertForMaskedLM PASS 2025-09-07T08:29:09.6764910Z AlbertForQuestionAnswering PASS 2025-09-07T08:29:09.6765161Z AllenaiLongformerBase PASS 2025-09-07T08:29:09.6765392Z BartForCausalLM PASS 2025-09-07T08:29:09.6779176Z BartForConditionalGeneration PASS 2025-09-07T08:29:09.6784866Z BertForMaskedLM PASS 2025-09-07T08:29:09.6789100Z BertForQuestionAnswering PASS 2025-09-07T08:29:09.6790676Z BlenderbotForCausalLM XFAIL 2025-09-07T08:29:09.6790924Z BlenderbotSmallForCausalLM PASS 2025-09-07T08:29:09.6791460Z BlenderbotSmallForConditionalGeneration PASS 2025-09-07T08:29:09.6791781Z CamemBert PASS 2025-09-07T08:29:09.6791989Z DebertaV2ForMaskedLM XFAIL 2025-09-07T08:29:09.6799611Z DebertaV2ForQuestionAnswering PASS 2025-09-07T08:29:09.6802518Z DistilBertForMaskedLM PASS 2025-09-07T08:29:09.6802803Z DistilBertForQuestionAnswering PASS 2025-09-07T08:29:09.6804536Z DistillGPT2 PASS 2025-09-07T08:29:09.6805694Z ElectraForCausalLM PASS 2025-09-07T08:29:09.6818074Z ElectraForQuestionAnswering PASS 2025-09-07T08:29:09.6823707Z GPT2ForSequenceClassification PASS 2025-09-07T08:29:09.6829283Z GoogleFnet PASS 2025-09-07T08:29:09.6834923Z LayoutLMForMaskedLM PASS 2025-09-07T08:29:09.6840051Z LayoutLMForSequenceClassification PASS 2025-09-07T08:29:09.6842112Z M2M100ForConditionalGeneration PASS 2025-09-07T08:29:09.6842572Z MBartForCausalLM PASS 2025-09-07T08:29:09.6842927Z MBartForConditionalGeneration PASS 2025-09-07T08:29:09.6843320Z MT5ForConditionalGeneration PASS 2025-09-07T08:29:09.6843697Z MegatronBertForCausalLM PASS 2025-09-07T08:29:09.6844080Z MegatronBertForQuestionAnswering PASS 2025-09-07T08:29:09.6844448Z MobileBertForMaskedLM PASS 2025-09-07T08:29:09.6847604Z MobileBertForQuestionAnswering PASS 2025-09-07T08:29:09.6858967Z OPTForCausalLM PASS 2025-09-07T08:29:09.6859418Z PLBartForCausalLM PASS 2025-09-07T08:29:09.6864121Z PLBartForConditionalGeneration PASS 2025-09-07T08:29:09.6864591Z PegasusForCausalLM PASS 2025-09-07T08:29:09.6872335Z PegasusForConditionalGeneration PASS 2025-09-07T08:29:09.6878737Z RobertaForCausalLM PASS 2025-09-07T08:29:09.6879056Z RobertaForQuestionAnswering PASS 2025-09-07T08:29:09.6883262Z T5ForConditionalGeneration PASS 2025-09-07T08:29:09.6883573Z T5Small PASS 2025-09-07T08:29:09.6883807Z TrOCRForCausalLM PASS 2025-09-07T08:29:09.6890070Z XGLMForCausalLM PASS 2025-09-07T08:29:09.6894999Z XLNetLMHeadModel PASS 2025-09-07T08:29:09.6897052Z YituTechConvBert PASS 2025-09-07T08:29:09.7367009Z + python benchmarks/dynamo/check_graph_breaks.py --actual /var/lib/jenkins/workspace/test/test-reports/inference_huggingface.csv --expected benchmarks/dynamo/ci_expected_accuracy/dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface_inference.csv 2025-09-07T08:29:09.9970087Z 
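The check_accuracy.py and check_graph_breaks.py invocations echoed above compare the results CSV produced by this run against the expected CSV checked into benchmarks/dynamo/ci_expected_accuracy and print one PASS/XFAIL/FAIL line per model (the graph-break comparison output follows below). A hypothetical sketch of that kind of per-model CSV comparison; the column names "name" and "accuracy" are assumptions, not the actual schema used by those scripts:

```python
# Hypothetical per-model comparison of an "actual" results CSV against an
# "expected" CSV. Column names ("name", "accuracy") are assumptions.
import sys
import pandas as pd

def compare(actual_csv: str, expected_csv: str, column: str = "accuracy") -> int:
    actual = pd.read_csv(actual_csv)
    expected = pd.read_csv(expected_csv)
    merged = expected.merge(actual, on="name", suffixes=("_expected", "_actual"))
    failures = 0
    for _, row in merged.iterrows():
        ok = row[f"{column}_actual"] == row[f"{column}_expected"]
        print(f"{row['name']:<45} {'PASS' if ok else 'FAIL'}")
        failures += 0 if ok else 1
    return failures

if __name__ == "__main__":
    sys.exit(1 if compare(sys.argv[1], sys.argv[2]) else 0)
```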
AlbertForMaskedLM PASS 2025-09-07T08:29:09.9974692Z AlbertForQuestionAnswering PASS 2025-09-07T08:29:09.9976340Z AllenaiLongformerBase PASS 2025-09-07T08:29:09.9977174Z BartForCausalLM PASS 2025-09-07T08:29:09.9980825Z BartForConditionalGeneration PASS 2025-09-07T08:29:09.9981172Z BertForMaskedLM PASS 2025-09-07T08:29:09.9985077Z BertForQuestionAnswering PASS 2025-09-07T08:29:09.9985433Z BlenderbotForCausalLM PASS 2025-09-07T08:29:09.9991059Z BlenderbotSmallForCausalLM PASS 2025-09-07T08:29:09.9995961Z BlenderbotSmallForConditionalGeneration PASS 2025-09-07T08:29:10.0001446Z CamemBert PASS 2025-09-07T08:29:10.0003130Z DebertaV2ForMaskedLM PASS 2025-09-07T08:29:10.0003583Z DebertaV2ForQuestionAnswering PASS 2025-09-07T08:29:10.0003975Z DistilBertForMaskedLM PASS 2025-09-07T08:29:10.0004332Z DistilBertForQuestionAnswering PASS 2025-09-07T08:29:10.0006291Z DistillGPT2 PASS 2025-09-07T08:29:10.0019443Z ElectraForCausalLM PASS 2025-09-07T08:29:10.0022101Z ElectraForQuestionAnswering PASS 2025-09-07T08:29:10.0027773Z GPT2ForSequenceClassification PASS 2025-09-07T08:29:10.0031875Z GoogleFnet PASS 2025-09-07T08:29:10.0034135Z LayoutLMForMaskedLM PASS 2025-09-07T08:29:10.0034713Z LayoutLMForSequenceClassification PASS 2025-09-07T08:29:10.0037311Z M2M100ForConditionalGeneration PASS 2025-09-07T08:29:10.0037679Z MBartForCausalLM PASS 2025-09-07T08:29:10.0040847Z MBartForConditionalGeneration PASS 2025-09-07T08:29:10.0049103Z MT5ForConditionalGeneration PASS 2025-09-07T08:29:10.0051334Z MegatronBertForCausalLM PASS 2025-09-07T08:29:10.0051757Z MegatronBertForQuestionAnswering PASS 2025-09-07T08:29:10.0058647Z MobileBertForMaskedLM PASS 2025-09-07T08:29:10.0059211Z MobileBertForQuestionAnswering PASS 2025-09-07T08:29:10.0059578Z OPTForCausalLM PASS 2025-09-07T08:29:10.0059844Z PLBartForCausalLM PASS 2025-09-07T08:29:10.0067994Z PLBartForConditionalGeneration PASS 2025-09-07T08:29:10.0075147Z PegasusForCausalLM PASS 2025-09-07T08:29:10.0080044Z PegasusForConditionalGeneration PASS 2025-09-07T08:29:10.0082222Z RobertaForCausalLM PASS 2025-09-07T08:29:10.0082501Z RobertaForQuestionAnswering PASS 2025-09-07T08:29:10.0082771Z T5ForConditionalGeneration PASS 2025-09-07T08:29:10.0083010Z T5Small PASS 2025-09-07T08:29:10.0085462Z TrOCRForCausalLM PASS 2025-09-07T08:29:10.0093166Z XGLMForCausalLM PASS_BUT_FLAKY 2025-09-07T08:29:10.0094837Z XLNetLMHeadModel PASS 2025-09-07T08:29:10.0100454Z YituTechConvBert PASS 2025-09-07T08:29:10.0608915Z + sccache_epilogue 2025-09-07T08:29:10.0615335Z + echo '::group::Sccache Compilation Log' 2025-09-07T08:29:10.0621349Z ##[group]Sccache Compilation Log 2025-09-07T08:29:10.0622818Z + echo '=================== sccache compilation log ===================' 2025-09-07T08:29:10.0623129Z =================== sccache compilation log =================== 2025-09-07T08:29:10.0623571Z + python /var/lib/jenkins/workspace/.ci/pytorch/print_sccache_log.py /var/lib/jenkins/sccache_error.log 2025-09-07T08:29:10.0837345Z + echo '=========== If your build fails, please take a look at the log above for possible reasons ===========' 2025-09-07T08:29:10.0837846Z =========== If your build fails, please take a look at the log above for possible reasons =========== 2025-09-07T08:29:10.0838172Z + sccache --show-stats 2025-09-07T08:29:10.0886683Z Compile requests 613 2025-09-07T08:29:10.0887197Z Compile requests executed 0 2025-09-07T08:29:10.0887438Z Cache hits 0 2025-09-07T08:29:10.0887666Z Cache misses 0 2025-09-07T08:29:10.0887903Z Cache hits rate - 2025-09-07T08:29:10.0888131Z Cache timeouts 0 
2025-09-07T08:29:10.0888625Z Cache read errors 0 2025-09-07T08:29:10.0888824Z Forced recaches 0 2025-09-07T08:29:10.0889014Z Cache write errors 0 2025-09-07T08:29:10.0889211Z Cache errors 0 2025-09-07T08:29:10.0889417Z Compilations 0 2025-09-07T08:29:10.0889624Z Compilation failures 0 2025-09-07T08:29:10.0889823Z Non-cacheable compilations 0 2025-09-07T08:29:10.0890031Z Non-cacheable calls 41 2025-09-07T08:29:10.0890309Z Non-compilation calls 572 2025-09-07T08:29:10.0890522Z Unsupported compiler calls 0 2025-09-07T08:29:10.0890728Z Average cache write 0.000 s 2025-09-07T08:29:10.0890944Z Average compiler 0.000 s 2025-09-07T08:29:10.0891154Z Average cache read hit 0.000 s 2025-09-07T08:29:10.0891370Z Failed distributed compilations 0 2025-09-07T08:29:10.0891522Z 2025-09-07T08:29:10.0891599Z Non-cacheable reasons: 2025-09-07T08:29:10.0891794Z -E 41 2025-09-07T08:29:10.0891934Z 2025-09-07T08:29:10.0892095Z Cache location s3, name: ossci-compiler-cache-circleci-v2, prefix: / 2025-09-07T08:29:10.0892434Z Version (client) 0.10.0 2025-09-07T08:29:10.0892692Z + sccache --stop-server 2025-09-07T08:29:10.0909712Z Stopping sccache server... 2025-09-07T08:29:10.0920192Z Compile requests 613 2025-09-07T08:29:10.0920629Z Compile requests executed 0 2025-09-07T08:29:10.0920985Z Cache hits 0 2025-09-07T08:29:10.0921327Z Cache misses 0 2025-09-07T08:29:10.0921699Z Cache hits rate - 2025-09-07T08:29:10.0921926Z Cache timeouts 0 2025-09-07T08:29:10.0922167Z Cache read errors 0 2025-09-07T08:29:10.0922391Z Forced recaches 0 2025-09-07T08:29:10.0922614Z Cache write errors 0 2025-09-07T08:29:10.0922840Z Cache errors 0 2025-09-07T08:29:10.0923055Z Compilations 0 2025-09-07T08:29:10.0923274Z Compilation failures 0 2025-09-07T08:29:10.0923510Z Non-cacheable compilations 0 2025-09-07T08:29:10.0923744Z Non-cacheable calls 41 2025-09-07T08:29:10.0923970Z Non-compilation calls 572 2025-09-07T08:29:10.0924194Z Unsupported compiler calls 0 2025-09-07T08:29:10.0924436Z Average cache write 0.000 s 2025-09-07T08:29:10.0924673Z Average compiler 0.000 s 2025-09-07T08:29:10.0924903Z Average cache read hit 0.000 s 2025-09-07T08:29:10.0925134Z Failed distributed compilations 0 2025-09-07T08:29:10.0925283Z 2025-09-07T08:29:10.0925361Z Non-cacheable reasons: 2025-09-07T08:29:10.0925561Z -E 41 2025-09-07T08:29:10.0925705Z 2025-09-07T08:29:10.0925887Z Cache location s3, name: ossci-compiler-cache-circleci-v2, prefix: / 2025-09-07T08:29:10.0926211Z Version (client) 0.10.0 2025-09-07T08:29:10.0926475Z + echo ::endgroup:: 2025-09-07T08:29:10.0927250Z ##[endgroup] 2025-09-07T08:29:10.0927435Z + cleanup_workspace 2025-09-07T08:29:10.0927770Z + echo 'sudo may print the following warning message that can be ignored. The chown command will still run.' 2025-09-07T08:29:10.0928269Z sudo may print the following warning message that can be ignored. The chown command will still run. 
2025-09-07T08:29:10.0928689Z + echo ' sudo: setrlimit(RLIMIT_STACK): Operation not permitted' 2025-09-07T08:29:10.0929004Z sudo: setrlimit(RLIMIT_STACK): Operation not permitted 2025-09-07T08:29:10.0929366Z + echo 'For more details refer to https://github.com/sudo-project/sudo/issues/42' 2025-09-07T08:29:10.0929769Z For more details refer to https://github.com/sudo-project/sudo/issues/42 2025-09-07T08:29:10.0930093Z + sudo chown -R 1000 /var/lib/jenkins/workspace 2025-09-07T08:29:10.5152983Z ##[group]Run pytorch/test-infra/.github/actions/upload-benchmark-results@main 2025-09-07T08:29:10.5153310Z with: 2025-09-07T08:29:10.5153515Z benchmark-results-dir: test/test-reports 2025-09-07T08:29:10.5153750Z dry-run: false 2025-09-07T08:29:10.5153941Z schema-version: v3 2025-09-07T08:29:10.5154331Z github-token: *** 2025-09-07T08:29:10.5154514Z env: 2025-09-07T08:29:10.5154689Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:10.5155016Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:10.5155359Z ##[endgroup] 2025-09-07T08:29:10.5168890Z ##[group]Run set -eux 2025-09-07T08:29:10.5169098Z set -eux 2025-09-07T08:29:10.5169247Z  2025-09-07T08:29:10.5169405Z if [[ -n "" ]]; then 2025-09-07T08:29:10.5169588Z  source "" 2025-09-07T08:29:10.5169752Z fi 2025-09-07T08:29:10.5169985Z python3 -mpip install boto3==1.35.33 psutil==7.0.0 pynvml==12.0.0 2025-09-07T08:29:10.5170300Z  2025-09-07T08:29:10.5170448Z DEVICE_NAME="" 2025-09-07T08:29:10.5170615Z DEVICE_TYPE="" 2025-09-07T08:29:10.5170777Z  2025-09-07T08:29:10.5170942Z if command -v nvidia-smi; then 2025-09-07T08:29:10.5171233Z  # NB: I'm using PyTorch here to get the device name, however, it needs to 2025-09-07T08:29:10.5171659Z  # install the correct version of PyTorch manually for now. Any PyTorch 2025-09-07T08:29:10.5171979Z  # version is fine, I just use 2.7.1 to satify PYPIDEP linter 2025-09-07T08:29:10.5172245Z  python3 -mpip install torch==2.7.1 2025-09-07T08:29:10.5172464Z elif command -v rocminfo; then 2025-09-07T08:29:10.5172721Z  # NB: Installing torch on ROCm runner with pip here causes CI to fail 2025-09-07T08:29:10.5173050Z  # with a memoryview is too large error only on MI300 runners. Is pip 2025-09-07T08:29:10.5173385Z  # version on ROCm runner there too old? 
As a workaround, let's use the 2025-09-07T08:29:10.5173679Z  # GPU device name coming from rocminfo instead 2025-09-07T08:29:10.5173906Z  DEVICE_NAME=rocm 2025-09-07T08:29:10.5174193Z  DEVICE_TYPE=$(rocminfo | grep "Marketing Name" | tail -n1 | awk -F':' '{print $2}' | xargs) 2025-09-07T08:29:10.5174490Z fi 2025-09-07T08:29:10.5174641Z  2025-09-07T08:29:10.5174832Z echo "DEVICE_NAME=$DEVICE_NAME" >> $GITHUB_ENV 2025-09-07T08:29:10.5175097Z echo "DEVICE_TYPE=$DEVICE_TYPE" >> $GITHUB_ENV 2025-09-07T08:29:10.5182770Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:10.5183011Z env: 2025-09-07T08:29:10.5183174Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:10.5183467Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:10.5183769Z ##[endgroup] 2025-09-07T08:29:10.5210534Z + [[ -n '' ]] 2025-09-07T08:29:10.5211017Z + python3 -mpip install boto3==1.35.33 psutil==7.0.0 pynvml==12.0.0 2025-09-07T08:29:10.6985786Z Defaulting to user installation because normal site-packages is not writeable 2025-09-07T08:29:11.4711646Z Collecting boto3==1.35.33 2025-09-07T08:29:11.4841298Z Downloading boto3-1.35.33-py3-none-any.whl (139 kB) 2025-09-07T08:29:11.7048265Z Collecting psutil==7.0.0 2025-09-07T08:29:11.7080214Z Downloading psutil-7.0.0-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (277 kB) 2025-09-07T08:29:11.7296819Z Collecting pynvml==12.0.0 2025-09-07T08:29:11.7323227Z Downloading pynvml-12.0.0-py3-none-any.whl (26 kB) 2025-09-07T08:29:12.5628844Z Collecting botocore<1.36.0,>=1.35.33 2025-09-07T08:29:12.5660046Z Downloading botocore-1.35.99-py3-none-any.whl (13.3 MB) 2025-09-07T08:29:12.6854794Z Collecting s3transfer<0.11.0,>=0.10.0 2025-09-07T08:29:12.6919518Z Downloading s3transfer-0.10.4-py3-none-any.whl (83 kB) 2025-09-07T08:29:12.6958293Z Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /usr/lib/python3.9/site-packages (from boto3==1.35.33) (0.10.0) 2025-09-07T08:29:12.7274959Z Collecting nvidia-ml-py<13.0.0a0,>=12.0.0 2025-09-07T08:29:12.7303993Z Downloading nvidia_ml_py-12.575.51-py3-none-any.whl (47 kB) 2025-09-07T08:29:12.7384115Z Requirement already satisfied: urllib3<1.27,>=1.25.4 in /usr/lib/python3.9/site-packages (from botocore<1.36.0,>=1.35.33->boto3==1.35.33) (1.25.10) 2025-09-07T08:29:12.7392256Z Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/lib/python3.9/site-packages (from botocore<1.36.0,>=1.35.33->boto3==1.35.33) (2.8.1) 2025-09-07T08:29:12.8740615Z Requirement already satisfied: six>=1.5 in /usr/lib/python3.9/site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.36.0,>=1.35.33->boto3==1.35.33) (1.15.0) 2025-09-07T08:29:12.9753310Z Installing collected packages: botocore, s3transfer, nvidia-ml-py, pynvml, psutil, boto3 2025-09-07T08:29:13.3229291Z Attempting uninstall: nvidia-ml-py 2025-09-07T08:29:13.3234853Z Found existing installation: nvidia-ml-py 11.525.84 2025-09-07T08:29:13.3239462Z Uninstalling nvidia-ml-py-11.525.84: 2025-09-07T08:29:13.3372351Z Successfully uninstalled nvidia-ml-py-11.525.84 2025-09-07T08:29:13.3897682Z Attempting uninstall: psutil 2025-09-07T08:29:13.3898980Z Found existing installation: psutil 5.9.8 2025-09-07T08:29:13.3944410Z Uninstalling psutil-5.9.8: 2025-09-07T08:29:13.3949853Z Successfully uninstalled psutil-5.9.8 2025-09-07T08:29:13.5263708Z Successfully installed boto3-1.35.33 botocore-1.35.99 nvidia-ml-py-12.575.51 psutil-7.0.0 pynvml-12.0.0 s3transfer-0.10.4 
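Note: the upload-benchmark-results step shown above probes for nvidia-smi and rocminfo to decide how to label the runner; on this CPU-only linux.8xlarge.amx machine both probes fail, so DEVICE_NAME and DEVICE_TYPE stay empty in the trace that follows. A rough Python rendering of that probe is sketched below; the real action is the bash snippet above, and reading the NVIDIA device name through torch.cuda.get_device_name is an assumption here, not the action's code.

# Sketch of the GPU-probe logic displayed in the step above (assumptions noted).
import os
import shutil
import subprocess

def detect_device():
    if shutil.which("nvidia-smi"):
        import torch  # assumes a CUDA-enabled torch build is installed
        return "cuda", torch.cuda.get_device_name(0)
    if shutil.which("rocminfo"):
        out = subprocess.run(["rocminfo"], capture_output=True, text=True).stdout
        names = [line.split(":", 1)[1].strip()
                 for line in out.splitlines() if "Marketing Name" in line]
        return "rocm", names[-1] if names else ""
    return "", ""  # CPU-only runner, as in this job

name, dev_type = detect_device()
# GitHub Actions exports step-level env vars by appending to $GITHUB_ENV.
with open(os.environ["GITHUB_ENV"], "a") as env:
    env.write(f"DEVICE_NAME={name}\nDEVICE_TYPE={dev_type}\n")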
2025-09-07T08:29:13.6309314Z + DEVICE_NAME= 2025-09-07T08:29:13.6311261Z + DEVICE_TYPE= 2025-09-07T08:29:13.6311624Z + command -v nvidia-smi 2025-09-07T08:29:13.6316566Z + command -v rocminfo 2025-09-07T08:29:13.6316956Z + echo DEVICE_NAME= 2025-09-07T08:29:13.6319615Z + echo DEVICE_TYPE= 2025-09-07T08:29:13.6334296Z ##[group]Run set -eux 2025-09-07T08:29:13.6334485Z set -eux 2025-09-07T08:29:13.6334716Z  2025-09-07T08:29:13.6334895Z if [[ -z "${GITHUB_TOKEN}" ]]; then 2025-09-07T08:29:13.6335129Z  echo "Missing github-token input" 2025-09-07T08:29:13.6335337Z  exit 1 2025-09-07T08:29:13.6335490Z fi 2025-09-07T08:29:13.6339824Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:13.6340065Z env: 2025-09-07T08:29:13.6340223Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:13.6340514Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:13.6340820Z DEVICE_NAME: 2025-09-07T08:29:13.6340975Z DEVICE_TYPE: 2025-09-07T08:29:13.6341373Z GITHUB_TOKEN: *** 2025-09-07T08:29:13.6341542Z ##[endgroup] 2025-09-07T08:29:13.6371147Z + [[ -z *** ]] 2025-09-07T08:29:13.6406271Z ##[group]Run pytorch/test-infra/.github/actions/get-workflow-job-id@main 2025-09-07T08:29:13.6406624Z with: 2025-09-07T08:29:13.6407335Z github-token: *** 2025-09-07T08:29:13.6407661Z env: 2025-09-07T08:29:13.6407898Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:13.6408270Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:13.6408659Z DEVICE_NAME: 2025-09-07T08:29:13.6408850Z DEVICE_TYPE: 2025-09-07T08:29:13.6409018Z ##[endgroup] 2025-09-07T08:29:13.6418811Z ##[group]Run set -eux 2025-09-07T08:29:13.6419015Z set -eux 2025-09-07T08:29:13.6419184Z  2025-09-07T08:29:13.6419501Z python3 "${GITHUB_ACTION_PATH}/../../scripts/get_workflow_job_id.py" "${GITHUB_RUN_ID}" "${RUNNER_NAME}" 2025-09-07T08:29:13.6423642Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:13.6423899Z env: 2025-09-07T08:29:13.6424066Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:13.6424381Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:13.6424696Z DEVICE_NAME: 2025-09-07T08:29:13.6440450Z DEVICE_TYPE: 2025-09-07T08:29:13.6440920Z GITHUB_TOKEN: *** 2025-09-07T08:29:13.6441293Z ##[endgroup] 2025-09-07T08:29:13.6468695Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/get-workflow-job-id/../../scripts/get_workflow_job_id.py 17525270809 i-06b49f47ba3e131d7 2025-09-07T08:29:14.0439519Z setting job-id=49775559413 2025-09-07T08:29:14.0440348Z setting job-name=nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-09-07T08:29:14.0537162Z ##[group]Run set -eux 2025-09-07T08:29:14.0537378Z set -eux 2025-09-07T08:29:14.0537534Z  2025-09-07T08:29:14.0537700Z if [[ -n "" ]]; then 2025-09-07T08:29:14.0537895Z  source "" 2025-09-07T08:29:14.0538059Z fi 2025-09-07T08:29:14.0538207Z  2025-09-07T08:29:14.0538474Z python3 "${GITHUB_ACTION_PATH}/../../scripts/benchmarks/gather_metadata.py" \ 2025-09-07T08:29:14.0538796Z  --schema-version "${SCHEMA_VERSION}" \ 2025-09-07T08:29:14.0539041Z  --repo "${REPO}" \ 2025-09-07T08:29:14.0539246Z  --head-branch "${HEAD_BRANCH}" \ 2025-09-07T08:29:14.0539465Z  --head-sha "${HEAD_SHA}" \ 2025-09-07T08:29:14.0539689Z  --workflow-id "${WORKFLOW_RUN_ID}" \ 2025-09-07T08:29:14.0539920Z  --run-attempt "${RUN_ATTEMPT}" \ 
2025-09-07T08:29:14.0540211Z  --job-id "${JOB_ID}" \ 2025-09-07T08:29:14.0540422Z  --job-name "${JOB_NAME}" 2025-09-07T08:29:14.0545280Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:14.0545542Z env: 2025-09-07T08:29:14.0545705Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:14.0546020Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:14.0546331Z DEVICE_NAME: 2025-09-07T08:29:14.0546509Z DEVICE_TYPE: 2025-09-07T08:29:14.0546686Z SCHEMA_VERSION: v3 2025-09-07T08:29:14.0546874Z REPO: pytorch/pytorch 2025-09-07T08:29:14.0547075Z HEAD_BRANCH: refs/heads/main 2025-09-07T08:29:14.0547328Z HEAD_SHA: 93fb23d6fae7c4e82c4239a1033e522088742634 2025-09-07T08:29:14.0547572Z WORKFLOW_RUN_ID: 17525270809 2025-09-07T08:29:14.0547759Z RUN_ATTEMPT: 1 2025-09-07T08:29:14.0547932Z JOB_ID: 49775559413 2025-09-07T08:29:14.0548361Z JOB_NAME: nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-09-07T08:29:14.0548771Z ##[endgroup] 2025-09-07T08:29:14.0571523Z + [[ -n '' ]] 2025-09-07T08:29:14.0573087Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/upload-benchmark-results/../../scripts/benchmarks/gather_metadata.py --schema-version v3 --repo pytorch/pytorch --head-branch refs/heads/main --head-sha 93fb23d6fae7c4e82c4239a1033e522088742634 --workflow-id 17525270809 --run-attempt 1 --job-id 49775559413 --job-name 'nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx)' 2025-09-07T08:29:14.0826157Z ##[group]Run set -eux 2025-09-07T08:29:14.0826396Z set -eux 2025-09-07T08:29:14.0826550Z  2025-09-07T08:29:14.0826712Z if [[ -n "" ]]; then 2025-09-07T08:29:14.0826899Z  source "" 2025-09-07T08:29:14.0827062Z fi 2025-09-07T08:29:14.0827204Z  2025-09-07T08:29:14.0827460Z python3 "${GITHUB_ACTION_PATH}/../../scripts/benchmarks/gather_runners_info.py" 2025-09-07T08:29:14.0831999Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:14.0832250Z env: 2025-09-07T08:29:14.0832408Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:14.0832715Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:14.0833038Z DEVICE_NAME: 2025-09-07T08:29:14.0833200Z DEVICE_TYPE: 2025-09-07T08:29:14.0833355Z ##[endgroup] 2025-09-07T08:29:14.0857399Z + [[ -n '' ]] 2025-09-07T08:29:14.0858301Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/upload-benchmark-results/../../scripts/benchmarks/gather_runners_info.py 2025-09-07T08:29:14.1185408Z INFO:root:Fail to import torch to get the device name 2025-09-07T08:29:14.1275545Z ##[group]Run set -eux 2025-09-07T08:29:14.1275753Z set -eux 2025-09-07T08:29:14.1275919Z  2025-09-07T08:29:14.1276101Z # TODO (huydhn): Implement this part 2025-09-07T08:29:14.1276369Z echo "dependencies={}" >> "${GITHUB_OUTPUT}" 2025-09-07T08:29:14.1280925Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:14.1281173Z env: 2025-09-07T08:29:14.1281338Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:14.1281640Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:14.1281947Z DEVICE_NAME: 2025-09-07T08:29:14.1282122Z DEVICE_TYPE: 2025-09-07T08:29:14.1282291Z ##[endgroup] 2025-09-07T08:29:14.1306448Z + echo 'dependencies={}' 2025-09-07T08:29:14.1329983Z ##[group]Run set -eux 
2025-09-07T08:29:14.1330207Z set -eux 2025-09-07T08:29:14.1330381Z  2025-09-07T08:29:14.1330534Z if [[ -n "" ]]; then 2025-09-07T08:29:14.1330725Z  source "" 2025-09-07T08:29:14.1330892Z fi 2025-09-07T08:29:14.1331043Z  2025-09-07T08:29:14.1331225Z if [[ ! -d "${BENCHMARK_RESULTS_DIR}" ]]; then 2025-09-07T08:29:14.1331579Z  echo "${BENCHMARK_RESULTS_DIR} does not exist, skipping" 2025-09-07T08:29:14.1331882Z  # We don't want the job to fail if the directory doesn't exist 2025-09-07T08:29:14.1332130Z  exit 0 2025-09-07T08:29:14.1332277Z fi 2025-09-07T08:29:14.1332426Z  2025-09-07T08:29:14.1332596Z if [[ "${DRY_RUN}" == "true" ]]; then 2025-09-07T08:29:14.1332910Z  python3 "${GITHUB_ACTION_PATH}/../../scripts/upload_benchmark_results.py" \ 2025-09-07T08:29:14.1333269Z  --benchmark-results-dir "${BENCHMARK_RESULTS_DIR}" \ 2025-09-07T08:29:14.1333544Z  --metadata "${BENCHMARK_METADATA}" \ 2025-09-07T08:29:14.1333787Z  --runners "${RUNNER_INFO}" \ 2025-09-07T08:29:14.1334021Z  --dependencies "${DEPENDENCIES}" \ 2025-09-07T08:29:14.1334243Z  --dry-run 2025-09-07T08:29:14.1334412Z else 2025-09-07T08:29:14.1334664Z  python3 "${GITHUB_ACTION_PATH}/../../scripts/upload_benchmark_results.py" \ 2025-09-07T08:29:14.1335003Z  --benchmark-results-dir "${BENCHMARK_RESULTS_DIR}" \ 2025-09-07T08:29:14.1335273Z  --metadata "${BENCHMARK_METADATA}" \ 2025-09-07T08:29:14.1335495Z  --runners "${RUNNER_INFO}" \ 2025-09-07T08:29:14.1335725Z  --dependencies "${DEPENDENCIES}" 2025-09-07T08:29:14.1335937Z fi 2025-09-07T08:29:14.1339944Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:14.1340198Z env: 2025-09-07T08:29:14.1340355Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:14.1340665Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:14.1341104Z DEVICE_NAME: 2025-09-07T08:29:14.1341282Z DEVICE_TYPE: 2025-09-07T08:29:14.1341462Z BENCHMARK_RESULTS_DIR: test/test-reports 2025-09-07T08:29:14.1341681Z DRY_RUN: false 2025-09-07T08:29:14.1342562Z BENCHMARK_METADATA: {"timestamp": 1757233754, "schema_version": "v3", "name": "nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx)", "repo": "pytorch/pytorch", "head_branch": "refs/heads/main", "head_sha": "93fb23d6fae7c4e82c4239a1033e522088742634", "workflow_id": 17525270809, "run_attempt": 1, "job_id": 49775559413} 2025-09-07T08:29:14.1343647Z RUNNER_INFO: [{"cpu_info": "x86_64", "cpu_count": 32, "avail_mem_in_gb": 123, "extra_info": {"hostname": "ip-10-0-56-51.ec2.internal"}, "name": "", "type": ""}] 2025-09-07T08:29:14.1344032Z DEPENDENCIES: {} 2025-09-07T08:29:14.1344205Z ##[endgroup] 2025-09-07T08:29:14.1364924Z + [[ -n '' ]] 2025-09-07T08:29:14.1365387Z + [[ ! 
-d test/test-reports ]] 2025-09-07T08:29:14.1365673Z + [[ false == \t\r\u\e ]] 2025-09-07T08:29:14.1367567Z + python3 /home/ec2-user/actions-runner/_work/_actions/pytorch/test-infra/main/.github/actions/upload-benchmark-results/../../scripts/upload_benchmark_results.py --benchmark-results-dir test/test-reports --metadata '{"timestamp": 1757233754, "schema_version": "v3", "name": "nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx)", "repo": "pytorch/pytorch", "head_branch": "refs/heads/main", "head_sha": "93fb23d6fae7c4e82c4239a1033e522088742634", "workflow_id": 17525270809, "run_attempt": 1, "job_id": 49775559413}' --runners '[{"cpu_info": "x86_64", "cpu_count": 32, "avail_mem_in_gb": 123, "extra_info": {"hostname": "ip-10-0-56-51.ec2.internal"}, "name": "", "type": ""}]' --dependencies '{}' 2025-09-07T08:29:14.2528411Z INFO:root:Upload test/test-reports/inference_huggingface.json to s3://ossci-benchmarks/v3/pytorch/pytorch/17525270809/49775559413/inference_huggingface.json 2025-09-07T08:29:14.2829361Z INFO:botocore.credentials:Found credentials from IAM Role: gh-ci-github-action-runners-runner-role 2025-09-07T08:29:14.4803317Z ##[group]Run cat test/**/*_toprint.log || true 2025-09-07T08:29:14.4803622Z cat test/**/*_toprint.log || true 2025-09-07T08:29:14.4808751Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:14.4809137Z env: 2025-09-07T08:29:14.4809315Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:14.4809609Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:14.4809902Z DEVICE_NAME: 2025-09-07T08:29:14.4810064Z DEVICE_TYPE: 2025-09-07T08:29:14.4810219Z ##[endgroup] 2025-09-07T08:29:14.4881233Z cat: 'test/**/*_toprint.log': No such file or directory 2025-09-07T08:29:14.4902290Z ##[group]Run kill "$MONITOR_SCRIPT_PID" 2025-09-07T08:29:14.4902547Z kill "$MONITOR_SCRIPT_PID" 2025-09-07T08:29:14.4906354Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:14.4906609Z env: 2025-09-07T08:29:14.4906777Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:14.4907079Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:14.4907386Z DEVICE_NAME: 2025-09-07T08:29:14.4907550Z DEVICE_TYPE: 2025-09-07T08:29:14.4907726Z MONITOR_SCRIPT_PID: 48467 2025-09-07T08:29:14.4907906Z ##[endgroup] 2025-09-07T08:29:14.5003567Z Prepare all required actions 2025-09-07T08:29:14.5004021Z Getting action download info 2025-09-07T08:29:14.6365834Z Download action repository 'seemethere/upload-artifact-s3@v5' (SHA:baba72d0712b404f646cebe0730933554ebce96a) 2025-09-07T08:29:14.8405315Z Download action repository 'actions/upload-artifact@v4' (SHA:ea165f8d65b6e75b540449e92b4886f43607fa02) 2025-09-07T08:29:15.1854640Z ##[group]Run ./.github/actions/upload-test-artifacts 2025-09-07T08:29:15.1854874Z with: 2025-09-07T08:29:15.1855198Z file-suffix: test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413 2025-09-07T08:29:15.1855567Z s3-bucket: gha-artifacts 2025-09-07T08:29:15.1855735Z env: 2025-09-07T08:29:15.1855888Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:15.1856173Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:15.1856474Z DEVICE_NAME: 2025-09-07T08:29:15.1856635Z DEVICE_TYPE: 2025-09-07T08:29:15.1856826Z ##[endgroup] 2025-09-07T08:29:15.1872519Z ##[group]Run # Remove any previous test jsons if they exist 
2025-09-07T08:29:15.1872813Z # Remove any previous test jsons if they exist 2025-09-07T08:29:15.1873057Z rm -f test-jsons-*.zip 2025-09-07T08:29:15.1873334Z zip -r "test-jsons-${FILE_SUFFIX}.zip" test/test-reports -i '*.json' 2025-09-07T08:29:15.1877729Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:15.1877976Z env: 2025-09-07T08:29:15.1878140Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:15.1878492Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:15.1878887Z DEVICE_NAME: 2025-09-07T08:29:15.1879055Z DEVICE_TYPE: 2025-09-07T08:29:15.1879383Z FILE_SUFFIX: test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413 2025-09-07T08:29:15.1879740Z ##[endgroup] 2025-09-07T08:29:15.2077448Z adding: test/test-reports/inference_huggingface.json (deflated 99%) 2025-09-07T08:29:15.2096322Z ##[group]Run # Remove any previous test reports if they exist 2025-09-07T08:29:15.2096644Z # Remove any previous test reports if they exist 2025-09-07T08:29:15.2096902Z rm -f test-reports-*.zip 2025-09-07T08:29:15.2097214Z zip -r "test-reports-${FILE_SUFFIX}.zip" test/test-reports -i '*.xml' -i '*.csv' 2025-09-07T08:29:15.2101521Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:15.2101763Z env: 2025-09-07T08:29:15.2101926Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:15.2102223Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:15.2102534Z DEVICE_NAME: 2025-09-07T08:29:15.2102695Z DEVICE_TYPE: 2025-09-07T08:29:15.2103019Z FILE_SUFFIX: test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413 2025-09-07T08:29:15.2103369Z ##[endgroup] 2025-09-07T08:29:15.2155290Z adding: test/test-reports/inference_huggingface.csv (deflated 68%) 2025-09-07T08:29:15.2158005Z adding: test/test-reports/inference_huggingface_graph_breaks.csv (deflated 85%) 2025-09-07T08:29:15.2158583Z adding: test/test-reports/inference_huggingface_graph_break_deduped.csv (deflated 63%) 2025-09-07T08:29:15.2173069Z ##[group]Run # Remove any previous usage logs if they exist 2025-09-07T08:29:15.2173388Z # Remove any previous usage logs if they exist 2025-09-07T08:29:15.2173626Z rm -f logs-*.zip 2025-09-07T08:29:15.2173861Z zip "logs-${FILE_SUFFIX}.zip" 'usage_log.txt' || true 2025-09-07T08:29:15.2174166Z zip -r "logs-${FILE_SUFFIX}.zip" test/test-reports -i '*.log' || true 2025-09-07T08:29:15.2177955Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:15.2178199Z env: 2025-09-07T08:29:15.2178357Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:15.2178645Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:15.2178960Z DEVICE_NAME: 2025-09-07T08:29:15.2179247Z DEVICE_TYPE: 2025-09-07T08:29:15.2179574Z FILE_SUFFIX: test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413 2025-09-07T08:29:15.2179927Z ##[endgroup] 2025-09-07T08:29:15.2266770Z adding: usage_log.txt (deflated 96%) 2025-09-07T08:29:15.2279205Z 2025-09-07T08:29:15.2282881Z zip error: Nothing to do! 
(logs-test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413.zip) 2025-09-07T08:29:15.2324826Z ##[group]Run # Remove any previous debugging artifacts if they exist 2025-09-07T08:29:15.2325186Z # Remove any previous debugging artifacts if they exist 2025-09-07T08:29:15.2325441Z rm -f debug-*.zip 2025-09-07T08:29:15.2325641Z if [ -d 'test/debug' ]; then 2025-09-07T08:29:15.2325881Z  zip -r "debug-${FILE_SUFFIX}.zip" test/debug 2025-09-07T08:29:15.2326107Z fi 2025-09-07T08:29:15.2330534Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:15.2330781Z env: 2025-09-07T08:29:15.2330946Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:15.2331246Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:15.2331561Z DEVICE_NAME: 2025-09-07T08:29:15.2331720Z DEVICE_TYPE: 2025-09-07T08:29:15.2332052Z FILE_SUFFIX: test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413 2025-09-07T08:29:15.2332418Z ##[endgroup] 2025-09-07T08:29:15.2393119Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-09-07T08:29:15.2393337Z with: 2025-09-07T08:29:15.2393640Z s3-bucket: gha-artifacts 2025-09-07T08:29:15.2393860Z s3-prefix: pytorch/pytorch/17525270809/1/artifact 2025-09-07T08:29:15.2394086Z retention-days: 14 2025-09-07T08:29:15.2394250Z if-no-files-found: warn 2025-09-07T08:29:15.2394434Z path: test-jsons-*.zip 2025-09-07T08:29:15.2394610Z name: artifact 2025-09-07T08:29:15.2394777Z region: us-east-1 2025-09-07T08:29:15.2394928Z env: 2025-09-07T08:29:15.2395086Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:15.2395373Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:15.2395670Z DEVICE_NAME: 2025-09-07T08:29:15.2395818Z DEVICE_TYPE: 2025-09-07T08:29:15.2395969Z ##[endgroup] 2025-09-07T08:29:15.5282073Z NOTE: s3-prefix specified, ignoring name parameter 2025-09-07T08:29:15.5282619Z With the provided path, there will be 1 file uploaded 2025-09-07T08:29:15.5283039Z Uploading to s3 prefix: pytorch/pytorch/17525270809/1/artifact 2025-09-07T08:29:15.5310978Z Starting upload of test-jsons-test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413.zip 2025-09-07T08:29:15.6484267Z Finished upload of test-jsons-test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413.zip 2025-09-07T08:29:15.6625589Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-09-07T08:29:15.6625902Z with: 2025-09-07T08:29:15.6626094Z s3-bucket: gha-artifacts 2025-09-07T08:29:15.6626329Z s3-prefix: pytorch/pytorch/17525270809/1/artifact 2025-09-07T08:29:15.6626570Z retention-days: 14 2025-09-07T08:29:15.6626753Z if-no-files-found: error 2025-09-07T08:29:15.6626955Z path: test-reports-*.zip 2025-09-07T08:29:15.6627145Z name: artifact 2025-09-07T08:29:15.6627322Z region: us-east-1 2025-09-07T08:29:15.6627488Z env: 2025-09-07T08:29:15.6627656Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:15.6627947Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:15.6628250Z DEVICE_NAME: 2025-09-07T08:29:15.6628415Z DEVICE_TYPE: 2025-09-07T08:29:15.6628573Z ##[endgroup] 2025-09-07T08:29:15.9099864Z NOTE: s3-prefix specified, ignoring name parameter 2025-09-07T08:29:15.9100218Z With the provided path, there will be 1 file uploaded 2025-09-07T08:29:15.9100513Z Uploading to s3 prefix: pytorch/pytorch/17525270809/1/artifact 
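Note: the artifact steps in this part of the job zip the JSON, CSV/XML, and usage-log files from test/test-reports and push each archive to the gha-artifacts bucket under pytorch/pytorch/17525270809/1/artifact. A bare-bones sketch of that zip-then-upload pattern follows; it reuses the bucket, prefix, and file suffix from the log purely for illustration and is not the seemethere/upload-artifact-s3 action itself.

# Simplified sketch of the zip-and-upload-to-S3 pattern used by the artifact steps.
# Bucket, prefix, and suffix are copied from the log for illustration only.
import glob
import zipfile
import boto3

suffix = "test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413"
archive = f"test-reports-{suffix}.zip"

with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
    for pattern in ("test/test-reports/**/*.xml", "test/test-reports/**/*.csv"):
        for path in glob.glob(pattern, recursive=True):
            zf.write(path)

s3 = boto3.client("s3", region_name="us-east-1")
s3.upload_file(archive, "gha-artifacts", f"pytorch/pytorch/17525270809/1/artifact/{archive}")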
2025-09-07T08:29:15.9130355Z Starting upload of test-reports-test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413.zip 2025-09-07T08:29:16.0287622Z Finished upload of test-reports-test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413.zip 2025-09-07T08:29:16.0436671Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-09-07T08:29:16.0436906Z with: 2025-09-07T08:29:16.0437079Z s3-bucket: gha-artifacts 2025-09-07T08:29:16.0437308Z s3-prefix: pytorch/pytorch/17525270809/1/artifact 2025-09-07T08:29:16.0437549Z retention-days: 14 2025-09-07T08:29:16.0437723Z if-no-files-found: ignore 2025-09-07T08:29:16.0437931Z path: logs-*.zip 2025-09-07T08:29:16.0438095Z name: artifact 2025-09-07T08:29:16.0438261Z region: us-east-1 2025-09-07T08:29:16.0438417Z env: 2025-09-07T08:29:16.0438572Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:16.0438877Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:16.0439198Z DEVICE_NAME: 2025-09-07T08:29:16.0439366Z DEVICE_TYPE: 2025-09-07T08:29:16.0439529Z ##[endgroup] 2025-09-07T08:29:16.2992121Z NOTE: s3-prefix specified, ignoring name parameter 2025-09-07T08:29:16.2992680Z With the provided path, there will be 1 file uploaded 2025-09-07T08:29:16.2993015Z Uploading to s3 prefix: pytorch/pytorch/17525270809/1/artifact 2025-09-07T08:29:16.3022081Z Starting upload of logs-test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413.zip 2025-09-07T08:29:16.4311139Z Finished upload of logs-test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413.zip 2025-09-07T08:29:16.4455514Z ##[group]Run seemethere/upload-artifact-s3@v5 2025-09-07T08:29:16.4455742Z with: 2025-09-07T08:29:16.4455909Z s3-bucket: gha-artifacts 2025-09-07T08:29:16.4456149Z s3-prefix: pytorch/pytorch/17525270809/1/artifact 2025-09-07T08:29:16.4456400Z retention-days: 14 2025-09-07T08:29:16.4456607Z if-no-files-found: ignore 2025-09-07T08:29:16.4456815Z path: debug-*.zip 2025-09-07T08:29:16.4456984Z name: artifact 2025-09-07T08:29:16.4457148Z region: us-east-1 2025-09-07T08:29:16.4457306Z env: 2025-09-07T08:29:16.4457464Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:16.4457773Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:16.4458091Z DEVICE_NAME: 2025-09-07T08:29:16.4458240Z DEVICE_TYPE: 2025-09-07T08:29:16.4458392Z ##[endgroup] 2025-09-07T08:29:16.7200144Z No files were found with the provided path: debug-*.zip. No artifacts will be uploaded. 2025-09-07T08:29:16.7374197Z ##[group]Run # shellcheck disable=SC2156 2025-09-07T08:29:16.7374477Z # shellcheck disable=SC2156 2025-09-07T08:29:16.7374860Z find . 
-iname "core.[1-9]*" -exec docker exec "${DOCKER_CONTAINER_ID}" sh -c "gdb python {} -ex 'bt' -ex 'q'" \; 2025-09-07T08:29:16.7380235Z shell: /usr/bin/bash -e {0} 2025-09-07T08:29:16.7380426Z env: 2025-09-07T08:29:16.7380668Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:16.7380965Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:16.7381290Z DEVICE_NAME: 2025-09-07T08:29:16.7381455Z DEVICE_TYPE: 2025-09-07T08:29:16.7381615Z ##[endgroup] 2025-09-07T08:29:16.9112621Z Prepare all required actions 2025-09-07T08:29:16.9112973Z Getting action download info 2025-09-07T08:29:17.0327390Z ##[group]Run ./.github/actions/upload-utilization-stats 2025-09-07T08:29:17.0327729Z with: 2025-09-07T08:29:17.0327937Z job_id: 49775559413 2025-09-07T08:29:17.0328438Z job_name: nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-09-07T08:29:17.0328983Z workflow_name: inductor-nightly 2025-09-07T08:29:17.0329226Z workflow_run_id: 17525270809 2025-09-07T08:29:17.0329441Z workflow_attempt: 1 2025-09-07T08:29:17.0329641Z env: 2025-09-07T08:29:17.0329825Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:17.0330203Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:17.0330572Z DEVICE_NAME: 2025-09-07T08:29:17.0330766Z DEVICE_TYPE: 2025-09-07T08:29:17.0330954Z ##[endgroup] 2025-09-07T08:29:17.0362738Z ##[group]Run echo "workflow_id: 17525270809" 2025-09-07T08:29:17.0363037Z echo "workflow_id: 17525270809" 2025-09-07T08:29:17.0363272Z echo "workflow_attempt: 1" 2025-09-07T08:29:17.0363524Z echo "workflow_Name: inductor-nightly" 2025-09-07T08:29:17.0363770Z echo "job_id: 49775559413" 2025-09-07T08:29:17.0364237Z echo "job_name: nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx)" 2025-09-07T08:29:17.0364724Z echo "artifact_prefix: " 2025-09-07T08:29:17.0364962Z python3 --version 2025-09-07T08:29:17.0369818Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:17.0370068Z env: 2025-09-07T08:29:17.0370233Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:17.0370541Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:17.0370856Z DEVICE_NAME: 2025-09-07T08:29:17.0371018Z DEVICE_TYPE: 2025-09-07T08:29:17.0371175Z ##[endgroup] 2025-09-07T08:29:17.0393554Z workflow_id: 17525270809 2025-09-07T08:29:17.0394027Z workflow_attempt: 1 2025-09-07T08:29:17.0394310Z workflow_Name: inductor-nightly 2025-09-07T08:29:17.0394627Z job_id: 49775559413 2025-09-07T08:29:17.0395672Z job_name: nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx) 2025-09-07T08:29:17.0396428Z artifact_prefix: 2025-09-07T08:29:17.0405750Z Python 3.9.23 2025-09-07T08:29:17.0435278Z ##[group]Run nick-fields/retry@v3.0.0 2025-09-07T08:29:17.0435508Z with: 2025-09-07T08:29:17.0435665Z shell: bash 2025-09-07T08:29:17.0435839Z timeout_minutes: 5 2025-09-07T08:29:17.0436019Z max_attempts: 5 2025-09-07T08:29:17.0436210Z retry_wait_seconds: 30 2025-09-07T08:29:17.0436579Z command: set -eu python3 -m pip install python-dateutil==2.8.2 boto3==1.35.42 pandas==2.1.3 dataclasses_json==0.6.7 2025-09-07T08:29:17.0436965Z polling_interval_seconds: 1 2025-09-07T08:29:17.0437178Z warning_on_retry: true 2025-09-07T08:29:17.0437369Z continue_on_error: false 2025-09-07T08:29:17.0437553Z env: 
2025-09-07T08:29:17.0437717Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:17.0438041Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:17.0438383Z DEVICE_NAME: 2025-09-07T08:29:17.0438554Z DEVICE_TYPE: 2025-09-07T08:29:17.0438720Z ##[endgroup] 2025-09-07T08:29:17.3030714Z Defaulting to user installation because normal site-packages is not writeable 2025-09-07T08:29:17.3588554Z Collecting python-dateutil==2.8.2 2025-09-07T08:29:17.3887837Z Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB) 2025-09-07T08:29:18.0903356Z Collecting boto3==1.35.42 2025-09-07T08:29:18.1003913Z Downloading boto3-1.35.42-py3-none-any.whl (139 kB) 2025-09-07T08:29:18.4797829Z Collecting pandas==2.1.3 2025-09-07T08:29:18.4892639Z Downloading pandas-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.3 MB) 2025-09-07T08:29:18.5872229Z Requirement already satisfied: dataclasses_json==0.6.7 in /home/ec2-user/.local/lib/python3.9/site-packages (0.6.7) 2025-09-07T08:29:18.5883254Z Requirement already satisfied: six>=1.5 in /usr/lib/python3.9/site-packages (from python-dateutil==2.8.2) (1.15.0) 2025-09-07T08:29:18.5915097Z Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /usr/lib/python3.9/site-packages (from boto3==1.35.42) (0.10.0) 2025-09-07T08:29:18.5915753Z Requirement already satisfied: s3transfer<0.11.0,>=0.10.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from boto3==1.35.42) (0.10.4) 2025-09-07T08:29:18.5924636Z Requirement already satisfied: botocore<1.36.0,>=1.35.42 in /home/ec2-user/.local/lib/python3.9/site-packages (from boto3==1.35.42) (1.35.99) 2025-09-07T08:29:18.6591484Z Collecting tzdata>=2022.1 2025-09-07T08:29:18.6684597Z Downloading tzdata-2025.2-py2.py3-none-any.whl (347 kB) 2025-09-07T08:29:19.2539128Z Collecting numpy<2,>=1.22.4 2025-09-07T08:29:19.2633914Z Downloading numpy-1.26.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB) 2025-09-07T08:29:19.3806497Z Requirement already satisfied: pytz>=2020.1 in /usr/lib/python3.9/site-packages (from pandas==2.1.3) (2022.7.1) 2025-09-07T08:29:19.3846222Z Requirement already satisfied: typing-inspect<1,>=0.4.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from dataclasses_json==0.6.7) (0.9.0) 2025-09-07T08:29:19.3847996Z Requirement already satisfied: marshmallow<4.0.0,>=3.18.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from dataclasses_json==0.6.7) (3.26.1) 2025-09-07T08:29:19.3889743Z Requirement already satisfied: urllib3<1.27,>=1.25.4 in /usr/lib/python3.9/site-packages (from botocore<1.36.0,>=1.35.42->boto3==1.35.42) (1.25.10) 2025-09-07T08:29:19.3979452Z Requirement already satisfied: packaging>=17.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from marshmallow<4.0.0,>=3.18.0->dataclasses_json==0.6.7) (25.0) 2025-09-07T08:29:19.4048473Z Requirement already satisfied: mypy-extensions>=0.3.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from typing-inspect<1,>=0.4.0->dataclasses_json==0.6.7) (1.1.0) 2025-09-07T08:29:19.4053443Z Requirement already satisfied: typing-extensions>=3.7.4 in /home/ec2-user/.local/lib/python3.9/site-packages (from typing-inspect<1,>=0.4.0->dataclasses_json==0.6.7) (4.15.0) 2025-09-07T08:29:19.5523608Z Installing collected packages: python-dateutil, tzdata, numpy, pandas, boto3 2025-09-07T08:29:23.4109309Z Attempting uninstall: boto3 2025-09-07T08:29:23.4116823Z Found existing installation: boto3 1.35.33 2025-09-07T08:29:23.4178815Z Uninstalling boto3-1.35.33: 
2025-09-07T08:29:23.4184638Z Successfully uninstalled boto3-1.35.33 2025-09-07T08:29:23.4591697Z Successfully installed boto3-1.35.42 numpy-1.26.4 pandas-2.1.3 python-dateutil-2.8.2 tzdata-2025.2 2025-09-07T08:29:24.1125550Z Command completed after 1 attempt(s). 2025-09-07T08:29:24.1172334Z ##[group]Run python3 -m tools.stats.upload_utilization_stats.upload_utilization_stats \ 2025-09-07T08:29:24.1172793Z python3 -m tools.stats.upload_utilization_stats.upload_utilization_stats \ 2025-09-07T08:29:24.1173110Z  --workflow-run-id "17525270809" \ 2025-09-07T08:29:24.1173350Z  --workflow-name "inductor-nightly" \ 2025-09-07T08:29:24.1173588Z  --workflow-run-attempt "1" \ 2025-09-07T08:29:24.1173802Z  --job-id "49775559413" \ 2025-09-07T08:29:24.1174253Z  --job-name "nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx)" \ 2025-09-07T08:29:24.1174688Z  --local-path "" \ 2025-09-07T08:29:24.1174887Z  --artifact-prefix "" 2025-09-07T08:29:24.1179742Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:24.1180086Z env: 2025-09-07T08:29:24.1180245Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:24.1180548Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:24.1180858Z DEVICE_NAME: 2025-09-07T08:29:24.1181256Z DEVICE_TYPE: 2025-09-07T08:29:24.1181419Z ##[endgroup] 2025-09-07T08:29:25.0313240Z repo: pytorch/pytorch 2025-09-07T08:29:25.0319378Z Search for test log in s3 bucket: ossci-utilization 2025-09-07T08:29:25.0321989Z Downloading logs-test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413.zip 2025-09-07T08:29:25.0322667Z extracting usage_log.txt from zip file logs-test-dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface-1-1-linux.8xlarge.amx_49775559413.zip 2025-09-07T08:29:25.0323168Z Converted Log Model: UtilizationMetadata: 2025-09-07T08:29:25.0324167Z UtilizationMetadata(level='metadata', workflow_id='17525270809', job_id='49775559413', workflow_name='inductor-nightly', job_name='nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx)', usage_collect_interval=1.0, data_model_version=1.5, start_at=1757231188, gpu_count=0, cpu_count=32, gpu_type=None, error=None) 2025-09-07T08:29:25.0325184Z [Db Segments] detected pytest cmd: 10, generated segments: 10 2025-09-07T08:29:25.0325454Z [db model] Peek db timeseries 2025-09-07T08:29:25.0325647Z :{ 2025-09-07T08:29:25.0325803Z "created_at": 1757233764, 2025-09-07T08:29:25.0326005Z "type": "utilization", 2025-09-07T08:29:25.0326189Z "tags": [ 2025-09-07T08:29:25.0326355Z "record" 2025-09-07T08:29:25.0326527Z ], 2025-09-07T08:29:25.0326691Z "time_stamp": 1757231188, 2025-09-07T08:29:25.0326891Z "repo": "pytorch/pytorch", 2025-09-07T08:29:25.0327359Z "workflow_id": 17525270809, 2025-09-07T08:29:25.0327567Z "run_attempt": 1, 2025-09-07T08:29:25.0327755Z "job_id": 49775559413, 2025-09-07T08:29:25.0327963Z "workflow_name": "inductor-nightly", 2025-09-07T08:29:25.0328408Z "job_name": "nightly-dynamo-benchmarks-test / test (dynamic_cpu_max_autotune_inductor_amp_freezing_huggingface, 1, 1, linux.8xlarge.amx)", 2025-09-07T08:29:25.0328813Z "json_data": "{}" 2025-09-07T08:29:25.0328972Z } 2025-09-07T08:29:25.0329277Z Writing 1 documents to S3 ossci-utilization/util_metadata/v_1.5/pytorch/pytorch/17525270809/1/49775559413/metadata 2025-09-07T08:29:25.0329802Z Done! 
Finish writing document to S3 ossci-utilization/util_metadata/v_1.5/pytorch/pytorch/17525270809/1/49775559413/metadata 2025-09-07T08:29:25.0330340Z Writing 508 documents to S3 ossci-utilization/util_timeseries/v_1.5/pytorch/pytorch/17525270809/1/49775559413/time_series 2025-09-07T08:29:25.0331220Z Done! Finish writing document to S3 ossci-utilization/util_timeseries/v_1.5/pytorch/pytorch/17525270809/1/49775559413/time_series 2025-09-07T08:29:25.1387542Z ##[group]Run pytorch/test-infra/.github/actions/teardown-linux@main 2025-09-07T08:29:25.1387857Z with: 2025-09-07T08:29:25.1388019Z env: 2025-09-07T08:29:25.1388189Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:25.1388502Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:25.1388826Z DEVICE_NAME: 2025-09-07T08:29:25.1388999Z DEVICE_TYPE: 2025-09-07T08:29:25.1389164Z ##[endgroup] 2025-09-07T08:29:25.1401656Z ##[group]Run set -eou pipefail 2025-09-07T08:29:25.1402083Z set -eou pipefail 2025-09-07T08:29:25.1402287Z  2025-09-07T08:29:25.1402558Z echo "Holding runner for 2 hours until all ssh sessions have logged out" 2025-09-07T08:29:25.1402868Z for _ in $(seq 1440); do 2025-09-07T08:29:25.1403131Z  # Break if no ssh session exists anymore 2025-09-07T08:29:25.1403380Z  if [ "$(who)" = "" ]; then 2025-09-07T08:29:25.1403592Z  break 2025-09-07T08:29:25.1403793Z  fi 2025-09-07T08:29:25.1403968Z  echo "." 2025-09-07T08:29:25.1404213Z  sleep 5 2025-09-07T08:29:25.1404389Z done 2025-09-07T08:29:25.1409076Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:25.1409392Z env: 2025-09-07T08:29:25.1409547Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:25.1409839Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:25.1410144Z DEVICE_NAME: 2025-09-07T08:29:25.1410302Z DEVICE_TYPE: 2025-09-07T08:29:25.1410448Z ##[endgroup] 2025-09-07T08:29:25.1432514Z Holding runner for 2 hours until all ssh sessions have logged out 2025-09-07T08:29:25.1506074Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2025-09-07T08:29:25.1506445Z # ignore expansion of "docker ps -q" since it could be empty 2025-09-07T08:29:25.1506718Z # shellcheck disable=SC2046 2025-09-07T08:29:25.1506957Z docker stop $(docker ps -q) || true 2025-09-07T08:29:25.1507180Z # Prune all of the docker images 2025-09-07T08:29:25.1507403Z docker system prune -af 2025-09-07T08:29:25.1511420Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:25.1511656Z env: 2025-09-07T08:29:25.1511809Z GIT_DEFAULT_BRANCH: main 2025-09-07T08:29:25.1512105Z DOCKER_CONTAINER_ID: f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:25.1512424Z DEVICE_NAME: 2025-09-07T08:29:25.1512589Z DEVICE_TYPE: 2025-09-07T08:29:25.1512747Z ##[endgroup] 2025-09-07T08:29:35.9925424Z f01c480268d5 2025-09-07T08:29:36.2862484Z Deleted Containers: 2025-09-07T08:29:36.2862869Z f01c480268d5ddbab549c968042e97b0710cdd3f33883b3d84bb8ef449f304dc 2025-09-07T08:29:36.2863127Z 2025-09-07T08:29:43.3341198Z Deleted Images: 2025-09-07T08:29:43.3342554Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image:pytorch-linux-jammy-py3-gcc11-inductor-benchmarks-ae53c6842aa4c2407d0ad976491ca941c2635c77 2025-09-07T08:29:43.3344933Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/ci-image@sha256:383efb45082f20b8c808cb0ba4df693a01359592233f641f1f486911ac320a9a 2025-09-07T08:29:43.3345791Z deleted: 
sha256:662d8c9dfc7db2f5d004293de4f2b7647941dee4c916479ef082d17fcdfd9c47 2025-09-07T08:29:43.3346227Z deleted: sha256:ea5ad443c754124b3a5a209c2663376b4c156947edef1b982a336148bbf9114d 2025-09-07T08:29:43.3346645Z deleted: sha256:284be7504f072e0c04da4e2190e8d0e1de73835ed67be81f3ddd7eafd5d06a3a 2025-09-07T08:29:43.3347096Z deleted: sha256:2f49ff4be65f7ca55de8d7028fb3df7d08232a9f043aa7ba27d9393724286281 2025-09-07T08:29:43.3347529Z deleted: sha256:f63b503fdd1cca198aecefb9eef7ffbeb5fbc723f2a8462f50316e56cd403cbc 2025-09-07T08:29:43.3347959Z deleted: sha256:f9d46e08457013f0e71d608ac3dd95b79c41120060a80baefa684048cc15574e 2025-09-07T08:29:43.3352467Z deleted: sha256:cab76e28615751b6d6a703103b1da790a67cb3a4ee2e8814de51de18ff8b595d 2025-09-07T08:29:43.3353213Z deleted: sha256:0b2d09aa482371591a32563a5db71472822abd096a347967a9bd2a177737109f 2025-09-07T08:29:43.3353638Z deleted: sha256:d306d346d5da05e9fd04284304b1637a0bf01ee97397c688d19d783d5e133de9 2025-09-07T08:29:43.3354084Z deleted: sha256:bb3381a916d410a6e304540bb0796099dc780cd11f5829e734b337e0e79acfe4 2025-09-07T08:29:43.3354494Z deleted: sha256:bcf487c27e826c092985285163fb896e3324460b1774f3eb2a66623cd31e7d87 2025-09-07T08:29:43.3354902Z deleted: sha256:7d13485a9bdc5c0e64ac5085b25f4dded75c60f74090369c1b6f3f546ee37e94 2025-09-07T08:29:43.3355319Z deleted: sha256:55351d98a4197542fa7c78089671f447a6ef88cc554b7fad4fc522e8d4d187b6 2025-09-07T08:29:43.3355736Z deleted: sha256:f884bc0c4f9a994f3b3f1d82205f3a7014b05c84ad0c1c2fa3254d15a44f31e1 2025-09-07T08:29:43.3356188Z deleted: sha256:cdd16785a15239e518604ea9ea31405d5225fa6411d1c6d74d6523bcebf759ab 2025-09-07T08:29:43.3356605Z deleted: sha256:2c5bc1dc49446d7df5784578ae7c99460a93b502aa0c3b9deffbb95ec5216860 2025-09-07T08:29:43.3357032Z deleted: sha256:bae1e956be98416ce7d1a6c2c6ef0917f467238e19291786f8e1fed36fa81956 2025-09-07T08:29:43.3357447Z deleted: sha256:2cb1f002ab1126b0606999a9557b3f7f5da1e453d5376d29d95d60a979a215c4 2025-09-07T08:29:43.3357943Z deleted: sha256:25055a5f67b9bce8fac50ee1508dcb0f862ed154de5ded734e55f60edaca385f 2025-09-07T08:29:43.3358363Z deleted: sha256:98024e2dd34a5899240e41ae14f59c657cdc005040773e6ad7cfe3d67cdac7a8 2025-09-07T08:29:43.3358799Z deleted: sha256:8d2e75659096b4af8a20c3e9a6cce899b6e720f638eacdfd7d41ec8a736efdde 2025-09-07T08:29:43.3359226Z deleted: sha256:7741a6bf043548509c51c32e44734f30dfe07f91ca56c64422b004c3c0444e68 2025-09-07T08:29:43.3359642Z deleted: sha256:e2e63edbd2512e413c388888eabade05a2a7876adf20e7f0e0c3660ac3acbd3d 2025-09-07T08:29:43.3360076Z deleted: sha256:7fdea0f7711ee22084f87dc6d651598b5e5c5237de828105f698cb6a937d4c9c 2025-09-07T08:29:43.3360501Z deleted: sha256:486a2cf42f9492f291d59d48f3cec5a0a72449d8b6ad7d7a02596da237cdd154 2025-09-07T08:29:43.3360918Z deleted: sha256:a17da64c93a4939fad81a3ff6b6cb30f988176a6e0062fcf9c65e06cd9b9c3fb 2025-09-07T08:29:43.3361357Z deleted: sha256:70b4a3a917b8f95b19ae5dab6f404af8fa1c886022e4a1d785654013d5d876af 2025-09-07T08:29:43.3361789Z deleted: sha256:bd1b9d6a8aa636a67023800dcd85e4a3a7a7a21d65c6e6491d169fa65b4404a9 2025-09-07T08:29:43.3362224Z deleted: sha256:e3befcf3d3693c1d7bf0535e6e6722f0aabb0123805443ef5915dd5441ed0b00 2025-09-07T08:29:43.3362656Z deleted: sha256:4b4f846f1c4266b015f5fdf8dac5346c083c3aee2375e337172c112677c5a8c0 2025-09-07T08:29:43.3363091Z deleted: sha256:f05dc4d1350267b90e07af241a64f86a928fb3d8de75717ac04ec5a0433d042f 2025-09-07T08:29:43.3363520Z deleted: sha256:b6b4de696915fa2db09844ec9ac44dbb2940b655cd356404cf1ff03eec644dad 2025-09-07T08:29:43.3363960Z deleted: 
sha256:da008bbe1fc29cb35b3949040e97eb801f3264a56c4dd1b9d43a3cb54f2a39b2 2025-09-07T08:29:43.3364430Z deleted: sha256:261da5d14cad99ee11dcdaeb6055726f38fc12b7c559ee9c6d2ddc3f288f4828 2025-09-07T08:29:43.3364865Z deleted: sha256:16f900c60e70d685a85ca571ee0dada993a02217bdd6bb8b1d49169e7e28cf41 2025-09-07T08:29:43.3365306Z deleted: sha256:f57b18c5cde1d1dc553a15e1e98141d4afc0b4d0bb1182cc85b2c21bd18bb783 2025-09-07T08:29:43.3365733Z deleted: sha256:3c79105088ac60b231e4553752ee42cb6a87f9d32736b32f0c2123dddec724e7 2025-09-07T08:29:43.3366167Z deleted: sha256:df1ffff478908236efb6ceb8e05e6e078f12b864f4d24ce598cba7b961fad65c 2025-09-07T08:29:43.3366597Z deleted: sha256:8170255b562b59b76768f18a5b84b1ba887db93d3fe43b87a74bdc6be4f82014 2025-09-07T08:29:43.3367342Z deleted: sha256:c863cfe6bed704be5a54617331e27158b6f5a492dd6b9ed9c99d23db017cf5e1 2025-09-07T08:29:43.3367788Z deleted: sha256:e9e5a98c073f72c3abf9cc98724a31a3791535574ac78aeda7eb5df4580b21d0 2025-09-07T08:29:43.3368220Z deleted: sha256:0a42ac98735ca6578911218be7a7918001fe8aee1eb33d98f0d0a153d0e1102d 2025-09-07T08:29:43.3368652Z deleted: sha256:77d5a8aaa4d0fe1210dda9ac1f0fa3cf6141fea925b6240b9839d7505d021d3f 2025-09-07T08:29:43.3369126Z deleted: sha256:fa6ec46c43532dc01449df1cc403de8bb5872f859076e90658534c51c1487ef9 2025-09-07T08:29:43.3369630Z deleted: sha256:424a12dd5083283e19af48d31b7f2e33911ca8f459796f17280eaf5777a9aa25 2025-09-07T08:29:43.3370053Z deleted: sha256:8f0499601e14f1073e20ce889b45d12ab33264f9cf30359ac29dddbf58a311aa 2025-09-07T08:29:43.3370513Z deleted: sha256:5a5fae32dfb81abcd7bf374018b11e8e42a5aa39841d4b94e822d306c9af015b 2025-09-07T08:29:43.3370938Z deleted: sha256:d1bda89f22d383d38dfb7f7590b3bb202ccb91814034e7c7e2493306a10151ef 2025-09-07T08:29:43.3371372Z deleted: sha256:dbf16c1fcae146528685a8f745f9c505b24ba9ef009c42b1bd711ff7bf51b936 2025-09-07T08:29:43.3371794Z deleted: sha256:f9ec0065788f638325536a37427e2635b760a32457f20ca0acbcef6946b1041b 2025-09-07T08:29:43.3372206Z deleted: sha256:9d9911dac8fb2ff7db87329f38625d73f452dfef8822830048bbc00541c7df14 2025-09-07T08:29:43.3372839Z deleted: sha256:de4c1937129850e357b0de484d230569f628ac0bc883b12eff42932cd1e193ce 2025-09-07T08:29:43.3373286Z deleted: sha256:7b3c9e5b56a1d74226a5c1a54e5cb5e749012aa9b1d2376c6e7503757e29c35b 2025-09-07T08:29:43.3373703Z deleted: sha256:8062a6f28fc5fe2a199e1c1c40b6c43b7e29eb0c452492b47ec6900413b19cb6 2025-09-07T08:29:43.3374133Z deleted: sha256:f879aeffe6886f8da80462b571f9307aa63bb961645bec55ff579187a81cfd0b 2025-09-07T08:29:43.3374599Z deleted: sha256:5c6ef06b3536a430194aee509a784ee889c4a9d6248cb20fd9290e87e4ee2245 2025-09-07T08:29:43.3375030Z deleted: sha256:461aea034a25a2d72be6adfe9213c457c4cbf48724e9cb1c57987afb87668f21 2025-09-07T08:29:43.3375463Z deleted: sha256:e342cd1c71b7d0b024ea16b4a11f3f7fbbc2e3d11ef754c9d242aa50c4f8b0a3 2025-09-07T08:29:43.3375884Z deleted: sha256:bffd35a7fa1ddcfe05f79b7d3cae4180928eeea00eaab7ed7f484bc31adfc1d5 2025-09-07T08:29:43.3376275Z deleted: sha256:b34e33e7b04b5cbb5d5852199430593bfa18ddfe9081df42284230a14ebb739e 2025-09-07T08:29:43.3376672Z deleted: sha256:21d9b55338774d9ddc66d0bfcc92af9c8d2ecd94d1710b7049f5a811e411af7b 2025-09-07T08:29:43.3377081Z deleted: sha256:6cc2b33909585d17bf269fb8297ff881249e136137254734f7d23b9583208718 2025-09-07T08:29:43.3377468Z deleted: sha256:ca7f55b7c6d6cb11ddd8e187da34c2695fc2ce7655d652b9c9dc140a01ed056f 2025-09-07T08:29:43.3377868Z deleted: sha256:a3ece3d0ab6e99ef783c4f8d27d0e38504ab4477590ef556c16d22d92ba63a43 2025-09-07T08:29:43.3378255Z deleted: 
sha256:c137b0d41177c753aa1b69b11d0dd1f82420bf8520371866c845b53dca10b2d0 2025-09-07T08:29:43.3378642Z deleted: sha256:1e0d92b07bce12e511af59f608edd1932b10704d700f5e7538e406b90ecbb615 2025-09-07T08:29:43.3379014Z deleted: sha256:2ec3d01b3031e9da124d67410f54866ec5c679a0d6e4aee6b31608c45ce7fd77 2025-09-07T08:29:43.3379378Z deleted: sha256:308cffbd71363688c672b2043c6b9bf647cfb84593c42c3d88e3f36ee8f7f1b4 2025-09-07T08:29:43.3379748Z deleted: sha256:d965d9873fa450daba50a85d961f0835b14374167d84cfafa6060d16229f4229 2025-09-07T08:29:43.3380119Z deleted: sha256:effd997e222f62a34133bb2ecf9c0ffee151e5797f72e734d86a270d2e722374 2025-09-07T08:29:43.3380517Z deleted: sha256:0bbc1c78c10ee09c2697cfcce347dc9edbf82a7ccc25a6db6ee0a8dda398f7f2 2025-09-07T08:29:43.3380906Z deleted: sha256:214858e773d1ad73c2965c19b29cbfd3e2a974daa879163e1c1eb96567a7ee06 2025-09-07T08:29:43.3381298Z deleted: sha256:a9c7a2cd7ae229b26e84c093de657d0f4334d6cc9301991c6c3245ff62a9a71d 2025-09-07T08:29:43.3381682Z deleted: sha256:749a80551ef3f272e2517cb065bc7a5250da47d0b36bf74ed453caa9a5fee265 2025-09-07T08:29:43.3382061Z deleted: sha256:39b014c4e62d21c11df6c6d775d3f345675014292198981f455bacc4515a0f7b 2025-09-07T08:29:43.3382439Z deleted: sha256:0f087c9a894566644f825f5f87308d92e4cf149c51f7cd4769cbfaeefd3df791 2025-09-07T08:29:43.3382806Z deleted: sha256:dc6eb6dad5f9e332f00af553440e857b1467db1be43dd910cdb6830ba0898d50 2025-09-07T08:29:43.3383039Z 2025-09-07T08:29:43.3383132Z Total reclaimed space: 52.84GB 2025-09-07T08:29:43.3459729Z Post job cleanup. 2025-09-07T08:29:43.3489226Z Post job cleanup. 2025-09-07T08:29:43.4252826Z [command]/usr/bin/git version 2025-09-07T08:29:43.4295919Z git version 2.47.1 2025-09-07T08:29:43.4323585Z Copying '/home/ec2-user/.gitconfig' to '/home/ec2-user/actions-runner/_work/_temp/0617cca5-8233-4b95-9e6b-065b2e9b58b2/.gitconfig' 2025-09-07T08:29:43.4333342Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/0617cca5-8233-4b95-9e6b-065b2e9b58b2' before making global git config changes 2025-09-07T08:29:43.4333932Z Adding repository directory to the temporary git global config as a safe directory 2025-09-07T08:29:43.4337773Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch 2025-09-07T08:29:43.4378806Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand 2025-09-07T08:29:43.4416895Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :" 2025-09-07T08:29:43.4722618Z Entering 'android/libs/fbjni' 2025-09-07T08:29:43.4780552Z Entering 'third_party/FP16' 2025-09-07T08:29:43.4829268Z Entering 'third_party/FXdiv' 2025-09-07T08:29:43.4888699Z Entering 'third_party/NNPACK' 2025-09-07T08:29:43.4941798Z Entering 'third_party/NVTX' 2025-09-07T08:29:43.4996198Z Entering 'third_party/VulkanMemoryAllocator' 2025-09-07T08:29:43.5046311Z Entering 'third_party/XNNPACK' 2025-09-07T08:29:43.5118857Z Entering 'third_party/aiter' 2025-09-07T08:29:43.5166131Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-09-07T08:29:43.5226971Z Entering 'third_party/benchmark' 2025-09-07T08:29:43.5281443Z Entering 'third_party/composable_kernel' 2025-09-07T08:29:43.5342810Z Entering 'third_party/cpp-httplib' 2025-09-07T08:29:43.5400743Z Entering 'third_party/cpuinfo' 2025-09-07T08:29:43.5466533Z Entering 'third_party/cudnn_frontend' 2025-09-07T08:29:43.5522045Z Entering 'third_party/cutlass' 
2025-09-07T08:29:43.5587073Z Entering 'third_party/fbgemm' 2025-09-07T08:29:43.5638718Z Entering 'third_party/fbgemm/external/asmjit' 2025-09-07T08:29:43.5695430Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-09-07T08:29:43.5748879Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-09-07T08:29:43.5801105Z Entering 'third_party/fbgemm/external/cutlass' 2025-09-07T08:29:43.5861618Z Entering 'third_party/fbgemm/external/googletest' 2025-09-07T08:29:43.5919178Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-09-07T08:29:43.5976520Z Entering 'third_party/fbgemm/external/json' 2025-09-07T08:29:43.6034861Z Entering 'third_party/flash-attention' 2025-09-07T08:29:43.6087255Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-09-07T08:29:43.6142347Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-09-07T08:29:43.6207392Z Entering 'third_party/flatbuffers' 2025-09-07T08:29:43.6264967Z Entering 'third_party/fmt' 2025-09-07T08:29:43.6322150Z Entering 'third_party/gemmlowp/gemmlowp' 2025-09-07T08:29:43.6382818Z Entering 'third_party/gloo' 2025-09-07T08:29:43.6439611Z Entering 'third_party/googletest' 2025-09-07T08:29:43.6497407Z Entering 'third_party/ideep' 2025-09-07T08:29:43.6546808Z Entering 'third_party/ideep/mkl-dnn' 2025-09-07T08:29:43.6608740Z Entering 'third_party/ittapi' 2025-09-07T08:29:43.6665505Z Entering 'third_party/kineto' 2025-09-07T08:29:43.6718020Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-09-07T08:29:43.6766718Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2025-09-07T08:29:43.6820798Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-09-07T08:29:43.6884141Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-09-07T08:29:43.6932883Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-09-07T08:29:43.6992081Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-09-07T08:29:43.7043407Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-09-07T08:29:43.7107484Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-09-07T08:29:43.7160964Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-09-07T08:29:43.7218546Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-09-07T08:29:43.7271858Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-09-07T08:29:43.7332983Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-09-07T08:29:43.7390398Z Entering 'third_party/kleidiai' 2025-09-07T08:29:43.7439677Z Entering 'third_party/mimalloc' 2025-09-07T08:29:43.7497326Z Entering 'third_party/nlohmann' 2025-09-07T08:29:43.7555721Z Entering 'third_party/onnx' 2025-09-07T08:29:43.7622059Z Entering 'third_party/onnx/third_party/pybind11' 2025-09-07T08:29:43.7684089Z Entering 'third_party/opentelemetry-cpp' 2025-09-07T08:29:43.7738109Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-09-07T08:29:43.7795821Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-09-07T08:29:43.7844016Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-09-07T08:29:43.7894017Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-09-07T08:29:43.7944002Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-09-07T08:29:43.7998914Z Entering 
'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-09-07T08:29:43.8052078Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-09-07T08:29:43.8109171Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-09-07T08:29:43.8162737Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-09-07T08:29:43.8224344Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-09-07T08:29:43.8297442Z Entering 'third_party/pocketfft' 2025-09-07T08:29:43.8345748Z Entering 'third_party/protobuf' 2025-09-07T08:29:43.8402050Z Entering 'third_party/protobuf/third_party/benchmark' 2025-09-07T08:29:43.8461630Z Entering 'third_party/protobuf/third_party/googletest' 2025-09-07T08:29:43.8516858Z Entering 'third_party/psimd' 2025-09-07T08:29:43.8576359Z Entering 'third_party/pthreadpool' 2025-09-07T08:29:43.8629209Z Entering 'third_party/pybind11' 2025-09-07T08:29:43.8687520Z Entering 'third_party/python-peachpy' 2025-09-07T08:29:43.8743167Z Entering 'third_party/sleef' 2025-09-07T08:29:43.8800338Z Entering 'third_party/tensorpipe' 2025-09-07T08:29:43.8857066Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-09-07T08:29:43.8913717Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-09-07T08:29:43.8964723Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-09-07T08:29:43.9018213Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-09-07T08:29:43.9077248Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-09-07T08:29:43.9160225Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader 2025-09-07T08:29:43.9188952Z http.https://github.com/.extraheader 2025-09-07T08:29:43.9202407Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader 2025-09-07T08:29:43.9235116Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :" 2025-09-07T08:29:43.9545612Z Entering 'android/libs/fbjni' 2025-09-07T08:29:43.9579813Z http.https://github.com/.extraheader 2025-09-07T08:29:43.9620530Z Entering 'third_party/FP16' 2025-09-07T08:29:43.9657004Z http.https://github.com/.extraheader 2025-09-07T08:29:43.9688059Z Entering 'third_party/FXdiv' 2025-09-07T08:29:43.9724102Z http.https://github.com/.extraheader 2025-09-07T08:29:43.9763196Z Entering 'third_party/NNPACK' 2025-09-07T08:29:43.9792815Z http.https://github.com/.extraheader 2025-09-07T08:29:43.9829293Z Entering 'third_party/NVTX' 2025-09-07T08:29:43.9865060Z http.https://github.com/.extraheader 2025-09-07T08:29:43.9901158Z Entering 'third_party/VulkanMemoryAllocator' 2025-09-07T08:29:43.9934499Z http.https://github.com/.extraheader 2025-09-07T08:29:43.9970489Z Entering 'third_party/XNNPACK' 2025-09-07T08:29:44.0005708Z http.https://github.com/.extraheader 2025-09-07T08:29:44.0056562Z Entering 'third_party/aiter' 2025-09-07T08:29:44.0093140Z http.https://github.com/.extraheader 2025-09-07T08:29:44.0124497Z Entering 'third_party/aiter/3rdparty/composable_kernel' 2025-09-07T08:29:44.0161480Z http.https://github.com/.extraheader 2025-09-07T08:29:44.0205463Z Entering 'third_party/benchmark' 2025-09-07T08:29:44.0238730Z http.https://github.com/.extraheader 2025-09-07T08:29:44.0280544Z Entering 'third_party/composable_kernel' 2025-09-07T08:29:44.0315842Z 
http.https://github.com/.extraheader 2025-09-07T08:29:44.0357816Z Entering 'third_party/cpp-httplib' 2025-09-07T08:29:44.0393045Z http.https://github.com/.extraheader 2025-09-07T08:29:44.0425502Z Entering 'third_party/cpuinfo' 2025-09-07T08:29:44.0464744Z http.https://github.com/.extraheader 2025-09-07T08:29:44.0498045Z Entering 'third_party/cudnn_frontend' 2025-09-07T08:29:44.0528551Z http.https://github.com/.extraheader 2025-09-07T08:29:44.0563980Z Entering 'third_party/cutlass' 2025-09-07T08:29:44.0601263Z http.https://github.com/.extraheader 2025-09-07T08:29:44.0645298Z Entering 'third_party/fbgemm' 2025-09-07T08:29:44.0682667Z http.https://github.com/.extraheader 2025-09-07T08:29:44.0719890Z Entering 'third_party/fbgemm/external/asmjit' 2025-09-07T08:29:44.0752245Z http.https://github.com/.extraheader 2025-09-07T08:29:44.0796462Z Entering 'third_party/fbgemm/external/composable_kernel' 2025-09-07T08:29:44.0830109Z http.https://github.com/.extraheader 2025-09-07T08:29:44.0873618Z Entering 'third_party/fbgemm/external/cpuinfo' 2025-09-07T08:29:44.0908328Z http.https://github.com/.extraheader 2025-09-07T08:29:44.0942275Z Entering 'third_party/fbgemm/external/cutlass' 2025-09-07T08:29:44.0980840Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1020986Z Entering 'third_party/fbgemm/external/googletest' 2025-09-07T08:29:44.1056910Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1099054Z Entering 'third_party/fbgemm/external/hipify_torch' 2025-09-07T08:29:44.1129822Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1162472Z Entering 'third_party/fbgemm/external/json' 2025-09-07T08:29:44.1195101Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1232665Z Entering 'third_party/flash-attention' 2025-09-07T08:29:44.1270882Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1310410Z Entering 'third_party/flash-attention/csrc/composable_kernel' 2025-09-07T08:29:44.1344285Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1388707Z Entering 'third_party/flash-attention/csrc/cutlass' 2025-09-07T08:29:44.1421192Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1467728Z Entering 'third_party/flatbuffers' 2025-09-07T08:29:44.1501527Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1539421Z Entering 'third_party/fmt' 2025-09-07T08:29:44.1577866Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1613358Z Entering 'third_party/gemmlowp/gemmlowp' 2025-09-07T08:29:44.1652747Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1691082Z Entering 'third_party/gloo' 2025-09-07T08:29:44.1724370Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1760433Z Entering 'third_party/googletest' 2025-09-07T08:29:44.1794932Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1831018Z Entering 'third_party/ideep' 2025-09-07T08:29:44.1869695Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1904688Z Entering 'third_party/ideep/mkl-dnn' 2025-09-07T08:29:44.1938458Z http.https://github.com/.extraheader 2025-09-07T08:29:44.1983256Z Entering 'third_party/ittapi' 2025-09-07T08:29:44.2017648Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2056232Z Entering 'third_party/kineto' 2025-09-07T08:29:44.2087326Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2123254Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2025-09-07T08:29:44.2161488Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2196152Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 
2025-09-07T08:29:44.2228129Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2269248Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2025-09-07T08:29:44.2301034Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2335396Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2025-09-07T08:29:44.2370998Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2411335Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2025-09-07T08:29:44.2442688Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2481721Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2025-09-07T08:29:44.2517680Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2561685Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2025-09-07T08:29:44.2596705Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2632432Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2025-09-07T08:29:44.2668620Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2703411Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2025-09-07T08:29:44.2734731Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2772296Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2025-09-07T08:29:44.2803662Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2837983Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2025-09-07T08:29:44.2874044Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2909577Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2025-09-07T08:29:44.2941123Z http.https://github.com/.extraheader 2025-09-07T08:29:44.2991047Z Entering 'third_party/kleidiai' 2025-09-07T08:29:44.3023522Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3063703Z Entering 'third_party/mimalloc' 2025-09-07T08:29:44.3104250Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3131662Z Entering 'third_party/nlohmann' 2025-09-07T08:29:44.3167679Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3198387Z Entering 'third_party/onnx' 2025-09-07T08:29:44.3232245Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3285372Z Entering 'third_party/onnx/third_party/pybind11' 2025-09-07T08:29:44.3318291Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3362047Z Entering 'third_party/opentelemetry-cpp' 2025-09-07T08:29:44.3393637Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3423575Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2025-09-07T08:29:44.3457776Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3503949Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2025-09-07T08:29:44.3524679Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3559582Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2025-09-07T08:29:44.3595984Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3627617Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2025-09-07T08:29:44.3662496Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3694322Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2025-09-07T08:29:44.3730071Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3764156Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2025-09-07T08:29:44.3801977Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3835414Z Entering 
'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2025-09-07T08:29:44.3879590Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3915720Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2025-09-07T08:29:44.3954881Z http.https://github.com/.extraheader 2025-09-07T08:29:44.3985609Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2025-09-07T08:29:44.4017443Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4059237Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2025-09-07T08:29:44.4095659Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4152778Z Entering 'third_party/pocketfft' 2025-09-07T08:29:44.4190695Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4229887Z Entering 'third_party/protobuf' 2025-09-07T08:29:44.4262852Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4300882Z Entering 'third_party/protobuf/third_party/benchmark' 2025-09-07T08:29:44.4332691Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4368730Z Entering 'third_party/protobuf/third_party/googletest' 2025-09-07T08:29:44.4408476Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4443305Z Entering 'third_party/psimd' 2025-09-07T08:29:44.4481142Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4514770Z Entering 'third_party/pthreadpool' 2025-09-07T08:29:44.4552871Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4589436Z Entering 'third_party/pybind11' 2025-09-07T08:29:44.4625767Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4662973Z Entering 'third_party/python-peachpy' 2025-09-07T08:29:44.4698009Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4732952Z Entering 'third_party/sleef' 2025-09-07T08:29:44.4774677Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4812823Z Entering 'third_party/tensorpipe' 2025-09-07T08:29:44.4845420Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4883876Z Entering 'third_party/tensorpipe/third_party/googletest' 2025-09-07T08:29:44.4918516Z http.https://github.com/.extraheader 2025-09-07T08:29:44.4953563Z Entering 'third_party/tensorpipe/third_party/libnop' 2025-09-07T08:29:44.4985698Z http.https://github.com/.extraheader 2025-09-07T08:29:44.5024400Z Entering 'third_party/tensorpipe/third_party/libuv' 2025-09-07T08:29:44.5058893Z http.https://github.com/.extraheader 2025-09-07T08:29:44.5095404Z Entering 'third_party/tensorpipe/third_party/pybind11' 2025-09-07T08:29:44.5131445Z http.https://github.com/.extraheader 2025-09-07T08:29:44.5164691Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2025-09-07T08:29:44.5202869Z http.https://github.com/.extraheader 2025-09-07T08:29:44.5345406Z A job completed hook has been configured by the self-hosted runner administrator 2025-09-07T08:29:44.5358767Z ##[group]Run '/home/ec2-user/runner-scripts/after_job.sh' 2025-09-07T08:29:44.5362165Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2025-09-07T08:29:44.5362445Z ##[endgroup] 2025-09-07T08:29:44.5447145Z [!ALERT!] Swap in detected! [!ALERT!] 2025-09-07T08:29:53.4086846Z [!ALERT!] Swap out detected [!ALERT!] 2025-09-07T08:30:08.4659180Z Cleaning up orphan processes
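The teardown above combines three cleanups. First, the wall of "deleted: sha256:..." layers ending in "Total reclaimed space: 52.84GB" is characteristic docker prune output from the runner's image cleanup (the exact invocation is not shown in this excerpt). Second, "Post job cleanup" scrubs the credentials that the checkout step wrote into git config: it unsets core.sshCommand and the authenticated http.https://github.com/.extraheader in the top-level repository and in every submodule. A minimal sketch of that scrub, reusing the commands recorded in the log and the checkout path reported earlier in the log, looks like this:

    cd /home/ec2-user/actions-runner/_work/pytorch/pytorch
    # Drop the injected auth header from the top-level repository ...
    git config --local --unset-all 'http.https://github.com/.extraheader' || :
    # ... and from every submodule, recursively, as the runner's cleanup does.
    git submodule foreach --recursive \
      sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"

The inner "lookup && unset || :" pattern only touches submodules where the header is actually set and keeps the foreach from aborting on the ones where it is not, which is why the log shows an Entering line for each submodule followed by the matched key. Third, the final "[!ALERT!] Swap in detected!" / "Swap out detected" lines come from the administrator-configured after_job.sh hook, whose contents are not part of this log; a plausible implementation (an assumption, not the actual script) would snapshot swap counters such as pswpin/pswpout from /proc/vmstat in before_job.sh and alert from after_job.sh if either counter grew during the job.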